Showing posts with label dev.

Sunday, December 04, 2011

JavaEE 6 app servers compared

Thank you Antonio!

Baseline/platform sizing of different JavaEE containers (disk, ram, startup).

http://agoncal.wordpress.com/2011/10/20/o-java-ee-6-application-servers-where-art-thou/

The more complex metrics of scalability (how CPU/memory grow as you add load), performance (first-call latency as well as high concurrency), and cluster/HA require holding the OS/hardware/VM/JVM constant, which takes quite a bit more setup and time. At least the numbers above are relatively stable.

Friday, June 03, 2011

hypervisor (vm) and jvm (java) and SLA and costs

I've been testing several approaches to optimize the platform that the applications run on. This blog post is just a brain dump without any clear direction other than current thoughts.

Most of the applications I work with would fall under the equivalent of the JavaEE6 web profile (jpa/web or jpa/ejb/web), with a couple that use messaging which, in reality, could be modified to work with other async-style approaches (while messaging also supports distributed work, most of the applications aren't reaching a critical mass where they need to distribute that work).

So, what are we talking about platform wise?

*jboss or tomcat (or, more appropriately, the new TomEE as an option)

*jvm

*OS to run it on (preferably with iSCSI and similar large-disk-space mounting support).

*hypervisor to run multiple guest OS/vm/appcontainers.

Some of the general goals are to reduce disk space and memory, maximize the number of applications that can run on a piece of hardware, and still protect or segregate applications from each other, so that if our 'time to market' haste produces a bad application, it only hurts itself and not the others. Failover/disaster recovery is also a consideration, with a minor emphasis on the time (and associated downtime) needed to increase capacity, but that is not as critical.

App Container

JBoss has been doing some wonderful things with the new JBoss 7 AS stack. I haven't finished my memory review, but I hope they got the 'memory bloat' under control. The JBoss 4.0.x series with one application can run in under 128MB in most cases, while the JBoss 5.x and 6.x series need double to triple that, 256MB-364MB, for the SAME app.

-jboss deployment bonus: The ability to deploy an application's 'configuration' beside it as a SAR in the same deployment directory as the application WITHOUT needing to modify the server itself is HUGE. I do not understand why people do not take more advantage of the SAR benefits. You create your application binary once, then vet/test with one SAR configuration, take the SAME binary to your staging/pre-deploy/uat/stress-testing/etc environments with different SAR configurations, then again move the SAME binary to production with a different SAR configuration. What you tested is what went live.

-And, once you setup the SAR configuration for the environment...leave it there and update the application binary with changes (assuming no additional configurations). The least variables to mess around with the better!
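Concretely, a SAR is just an archive with a META-INF/jboss-service.xml in it. A sketch of an environment-configuration SAR using JBoss's stock system-properties service (the MBean class is from the JBoss 4.x distribution; the archive name, MBean name, and property keys are my own illustration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- myapp-config.sar/META-INF/jboss-service.xml (names are illustrative) -->
<server>
  <!-- JBoss's stock service for pushing system properties into the JVM;
       the application reads these instead of bundling per-environment config. -->
  <mbean code="org.jboss.varia.property.SystemPropertiesService"
         name="myapp:type=Service,name=EnvironmentProperties">
    <attribute name="Properties">
      myapp.endpoint.url=https://staging.example.com/api
      myapp.cache.size=500
    </attribute>
  </mbean>
</server>
```

Drop a different copy of this SAR in each environment's deploy directory, and the same application binary picks up the right values everywhere.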

TomEE is a new player and I haven't reviewed it yet.

Jonas, unfortunately, has never given me a reason to pique my interest.

Geronimo and Glassfish are additional options, but they also do not provide any significant reason to change from JBoss (where I have the most experience/skill).

Tomcat/Jetty are decent web-only platforms, but they would not be considered as part of the strategy because they cannot support the full necessary stack.

Conclusion: JBoss is still the winner, but if memory is a constraint, be wary of JBoss 5/6 versus the older 4.0.x series. The new JBoss 7 AS is a significant rewrite and will hopefully address this, as well as additional scenarios.



jvm/os

This is where it gets interesting....

*JBoss again comes out ahead with the BoxGrinder project, which gives you predictable/repeatable platforms. This is kind of an outsider as it doesn't directly relate to any of the above areas, but it is a way of combining and using them in a cool (or more predictable, fewer-variables) fashion.

*Azul has their new Zing JVM/OS combo solution that will run on hypervisors (and is optimized for them). But it comes at a price of $5k-$6k per 'server', and I haven't touched, tested, or discussed whether a 'server' represents a single JVM that can run multiple app containers or not.

*Oracle has a not-much-discussed JVM/OS combo solution that will also run on hypervisors, called Maxine Virtual Edition: http://labs.oracle.com/projects/guestvm/
-GPL licensed/forever open sourced.
-takes cues from OpenJDK, so it will continue to keep up with recent JDK updates.
-not 'production' ready...if this can pick up some more steam, it is definitely a good place to go.

Away from the cool stuff, and back to reality --

Just Enough Operating System (JEOS) continues to be a buzzword, but with no real meat or applied solutions. The BoxGrinder project above does try to help, with some pre-defined approaches to a JEOS for the different Linux distributions. CentOS is still a popular choice for low-cost options, and the guys there are trying their best to get CentOS 6 out the door even as RHEL 6.1 gets released. If you want the faster turnaround, pay for it and get the benefit of testing and security announcements; otherwise, CentOS is free, but help them out.


hypervisor (virtualization)

The hypervisor battle is pretty hot right now, with no real clear winner yet.

With Xen and KVM as the current front-runners in the open-source server hypervisor segment (and others close behind), it's not really black and white which one to pick, although Xen has a bit of an edge with Citrix backing and paravirtualization support.

VMWare, Hyper-V (which announced CentOS support?!), and other commercial offerings also provide some competitive advantages over the open source alternatives (for a price).

Tuesday, February 15, 2011

Javamelody performance & usage statistics

One of the hidden gems in the open source world is a project called Javamelody.

I've been using this since late 2009 to help refactor/modify design and code based on usage findings. It is not a profiler, not a click-and-fix, not a quickly-fix-your-problems tool. It is a tool to get you the information, over time, that you need to make strategic decisions about design and code.

http://code.google.com/p/javamelody/

It gets all tiers of statistics within a single application -> the application's UI calls, business (ejb/facade/spring) calls, and sql calls.
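For anyone who wants to try it, the wiring is just a servlet filter in web.xml; a minimal sketch (the filter class name is per the JavaMelody project; check its site for the current setup details):

```xml
<!-- web.xml fragment: route all requests through JavaMelody's filter -->
<filter>
  <filter-name>monitoring</filter-name>
  <filter-class>net.bull.javamelody.MonitoringFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>monitoring</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
```

After that, the collected statistics are browsable from a /monitoring page inside the application itself.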

Recently I finally submitted a patch for the GWT-RPC detailed statistics I've been using for a while to help refine some products, again from a strategic point of view.

Enjoy!

Monday, August 09, 2010

Eclipse JPA tooling, Hibernate (jboss) tooling

Working on ways to improve the tooling/work environment when in a JPA project.

In the past, I pretty much hand-coded everything and relied on maven/unit tests to catch errors.

Quicknote experiences:

* To get JPA Tooling working, you need to map the JDBC driver manually/directly to the jar's filesystem location through the Eclipse Data Management features.

* More on JPA tooling, particularly with maven layout, here: http://www.eclipse.org/forums/index.php?t=msg&goto=508143

* To get Hibernate Tooling working, you need to add the JDBC driver to the classpath, EVEN IF you are using the 'Database Connection: JPA project configured' option (i.e., the direct jar filesystem mapping above does not carry over to Hibernate Tooling).

* In the persistence.xml, to avoid dealing with a lot of issues, remove the JTA requirements. This works for me because the Entity classes/domain are in a project separate from the Session Beans (the Entity Managers), so the Entity project has a non-JTA persistence.xml while the Session Bean (entity manager) project has a JTA persistence.xml. I hate inconsistencies, but this is the only way it seems to work.
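A minimal sketch of what I mean by the split, with the unit name, entity class, and datasource invented for illustration:

```xml
<!-- Entity project's persistence.xml: non-JTA, keeps the tooling happy -->
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
  <persistence-unit name="myDomainUnit" transaction-type="RESOURCE_LOCAL">
    <class>com.example.domain.MyEntity</class>
  </persistence-unit>
</persistence>

<!-- Session Bean project's persistence.xml: JTA, for the container -->
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
  <persistence-unit name="myDomainUnit" transaction-type="JTA">
    <jta-data-source>java:/MyDS</jta-data-source>
  </persistence-unit>
</persistence>
```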


Gains:

* In JPA tooling, it immediately checked the model against the database structure and identified a couple of case-sensitivity issues between field names and column names that were easy to fix.
* In Hibernate tooling, you can test-run JPA-QL queries to check that they work as expected, see timings, and review results. You can also look at the Dynamic SQL Preview to see the actual SQL, for future index optimizations.

Monday, July 26, 2010

(CI) Building Eclipse PDE plugins from Maven

...is a pain in the arse.

After evaluating maven-pde-plugin, which one would think would make it easy: turns out, not so much.

I've swapped over to using Tycho (because it appears to better support multiple build options, like update sites and RCP apps directly, instead of just plugins and features), but even that isn't proving trivial in the most basic sense.

But, using Tycho 0.9.0 from the ibiblio org.sonatype.tycho groupId (not to be confused with org.codehaus.tycho...or several other groupIds I've run into), you still have issues:

Errors like "Cannot find lifecycle mapping for packaging: 'eclipse-plugin'" come up a lot. On the off-chance there is a dependency issue, you are required to use an unstable release version of Maven 3 (as of 7/26/2010, at any rate). Using Maven 3.0-beta-1 you now get "Unknown packaging: eclipse-plugin"...so not much help there either.

Searching for help on either of these issues gets posts like 'fixed in Tycho 0.5.0' or 'you need to modify how you build from source'...which, if you got the binary from a public Maven repository, one would hope would just work (the whole reason most people use Maven is so you DON'T run into these issues).

Other people mention 'update m2eclipse'...except I'm running this from the command line for the purpose of eventually moving to Hudson/Continuous Integration. Maybe I misunderstand the purpose of this Maven plugin and it must be used in Eclipse with m2e?

Please help if you read this!


EDIT: the reasons I want to automate Eclipse PDE builds are:
1) I have an RCP app I would like to migrate over (from Eclipse 3.0, unfortunately)
2) the primary reason was to pre-load company JDBC drivers for use in Eclipse (http://www.eclipse.org/forums/index.php?t=msg&goto=549384)


ANSWER: do not assume the 'convention':
WRONG plugin artifactId: maven-tycho-plugin

CORRECT plugin artifactId: tycho-maven-plugin
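In pom.xml terms, the working declaration looks roughly like this (sketched for the 0.9.0/org.sonatype.tycho coordinates discussed above; as I understand it, the extensions flag is what registers the eclipse-plugin packaging):

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.sonatype.tycho</groupId>
      <artifactId>tycho-maven-plugin</artifactId> <!-- NOT maven-tycho-plugin -->
      <version>0.9.0</version>
      <extensions>true</extensions> <!-- enables the eclipse-plugin packaging -->
    </plugin>
  </plugins>
</build>
```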

Friday, April 23, 2010

BigDecimal v Float/float or Double/double for java transport

As I have posted previously, quite often I get involved in some type of financial portion of a solution, or the entirety of the solution is financial.

In java, BigDecimal is where you go for computational accuracy -- but what about if you just need to transport the data?

So I reviewed the information on the Sun/Oracle JDK site, and if you go search and read it, it isn't overly definitive on float/double (from a 'do I want to use this or not' standpoint).

After going through many other posts, mailing list searches, and reviews, I broke down and posted a question here:


I also started doing some manual tests myself, and finally got the 'answer' I was looking for:

float: 9 'locations'
double: 15 'locations'

What are locations? In my testing, I found that float can accurately store and retrieve 6 digits before the decimal and 3 after...or 3 before/6 after, or any variation on that theme. Similar for double: 9 before/6 after, and other variations.

Needless to say, that's why the documentation is vague: how much you can store before the decimal depends on what scale you are storing after it.

So, unless you can get a definitive max value and precision rule for a financial application, you might want to stick with the heavyweight of BigDecimal just to be sure.
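The kind of manual test I ran can be sketched like this (the helper names are my own; the point is the round-trip check through each type's string form):

```java
import java.math.BigDecimal;

public class MoneyPrecision {
    // True when the decimal value survives a round-trip through float.
    static boolean survivesFloat(String decimal) {
        float f = Float.parseFloat(decimal);
        return new BigDecimal(decimal).compareTo(new BigDecimal(Float.toString(f))) == 0;
    }

    // Same check, but through double.
    static boolean survivesDouble(String decimal) {
        double d = Double.parseDouble(decimal);
        return new BigDecimal(decimal).compareTo(new BigDecimal(Double.toString(d))) == 0;
    }

    public static void main(String[] args) {
        // 6 digits before the decimal + 2 after = 8 significant digits:
        // already too many for float (~7 digits), fine for double (~15).
        System.out.println(survivesFloat("1234567.89"));   // false -- the cents are lost
        System.out.println(survivesDouble("1234567.89"));  // true
        System.out.println(survivesFloat("1234.56"));      // true -- small enough to fit
    }
}
```

Running variations of this with different digit counts before/after the decimal is how you arrive at the 'locations' numbers above.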

----

Edit: I forgot to post *why* I was even looking at this!!

We were having some memory issues with an outsourced application (that lacked pagination) that had a DTO with 12 monetary value fields...12 BigDecimals per DTO. The List sizes ranged from 300 to 2,000 to 40k. The 40k (most extreme) case was taking up 45MB of memory! Changing the 12 fields from BigDecimal to primitive float dropped the same List down to 15MB (a third!).

However, the accuracy this application needed was not satisfied by float, so although I'm evaluating double, I may opt to play it safe and keep accuracy as more important than saving memory (and, instead, actually paginate the results!).

Thursday, August 06, 2009

least-invasive development improvement

LIDI - Least Invasive Development Improvement (team-oriented)

I am attempting to coin a term for what I have been trying to do for the last 8 years: maturing a very small development team that supports many projects.

Definitions
Small development team defined as under 10 people, including UI, Server, DB, and internal dev QA.

Many projects defined as 10-25 active, supported solutions, with >50% of them being unique solutions, while the rest may be re-tooled/variations of existing solutions.

Small to medium projects are defined (from a LOC standpoint) as between 10k and 500k lines. Most are web-based, but some are thick client. Most are 3-tier/n-tier; some are 2-tier thick-client-to-DB solutions.

Key Words
*logging
*unit testing
*performance testing and review
*scalability testing and review
*security testing and review
*configuration management
*runtime management
*runtime dependency checking/management (i.e. notification of issues)
****Business problem solved
****Expectations met

Prefix
I put the last two, business problem solved and expectations met, with many asterisks because, as many developers have experienced, doing all the performance/scalability/security testing in the world won't help you if you have to recode/redesign it and then re-do all that testing and review again.

Discussion
When working with a small development team that is already fighting with project priority conflicts, short deadlines, short requirements, and constant support and change-requests, the last thing on your or their mind is ADDING more work.

The above term, least-invasive, is on purpose -- there is no free lunch, there is no silver bullet. There will be compromises, but if you maintain a goal of trying to make it as least invasive as possible, and able to show and provide reasons and results that are touchable/matter, you will mature and progress!

Experiences

Step 1: Baseline

I know your first thought: I don't have time to come up with baseline metrics, we are already going nuts! Guess what, I'm *not* talking about baseline metrics! I am talking about getting the development *process* repeatable and stable -- that is your base for everything you do.

*Baseline: Convention

Yes, I borrowed this term from the Maven team. Make sure all your projects follow a similar folder layout, for example all java code is in /src/main/java, all html/jsp is in /src/main/webapp, etc. Get the team to the point where someone can checkout a project they never touched before and be able to know where to go/what to do.

*Baseline: Independent builder

Either find a person who will always be the independent builder, or set up some type of continuous integration system. Having someone/something ELSE doing builds, other than the developers, will greatly stabilize the process and document/flush out any outstanding issues in your build process. This is a painful lesson I learned from my VB6 days; now that I'm in java, I've chosen http://hudson.net as an independent build tool, while Continuum/CruiseControl are alternatives.

*Baseline: Build system/dependency management/versioning

I'm sure you just ran into a snag -- with an independent builder, you are learning that people build in different ways, or worse, they rely entirely on an IDE for builds. Moving to an ANT or Maven2 build system in java (in my case Maven2) helped ensure that builds are *consistent* and any gotchas actually get caught EARLIER rather than later. Let me say that again with an example: "This maven2 project is not building on my desktop, what a piece of crap" actually translates to "the project doesn't build on JDK1.4/JDK5/Windows/Linux/needs a library I forgot to add; let's fix it now while we're actively on the project instead of when we check it out 6 months later to fix a different issue."

Maven2 also helps with the dependency management problem, and the versioning problem. If you are always renaming your jars to be mylibrary.jar to include in your application, and you aren't sure which version that library is after-the-fact and trying to identify an issue, you know the problem.

*Baseline: Promotion process

This will be the most difficult baseline to adjust: a promotion process. What I mean, based on my experience with what seems to be working, is that you develop and deploy to a DEV environment. Work out the kinks as you find them. If you are lucky enough to have internal QA, have them review it on DEV. Then, when things look o.k., *promote* to a STAGING environment (including a different DB, server, everything). NEVER make custom tweaks on Staging; instead, always modify your promotion (or migration) scripts/process, because those migration scripts are exactly what you are also testing and exactly what will be used when you promote to Live. On Staging, you do UAT/Customer Acceptance, have them push it back if needed, make fixes on DEV, then promote back up to Staging for another review. THEN promote to Live.

Step 2: Improve ability to identify and fix basic stuff
What I mean by this is: let the developers use the tools they are already comfortable with. Unit testing. Diagnostic logging (or normal logging if you aren't familiar with the different logging types).

Unit tests: JUnit is great. NUnit exists for the other side. The goal is having a way to test that the code is doing what you want, AND BEING ABLE TO RUN THOSE TESTS REPEATABLY AND AUTOMATICALLY. This is not integration testing, just basic module/unit testing that the code behaves as expected, for whatever business expectations can be resolved in code.

Diagnostic logging: the goal is making sure your code logs somewhere you can retrieve, with useful information for making a correction. "It broke!" Well, you need to know what caused it to break, and the *quicker* you can find that out, the more time you'll have for other things. Rather than re-testing manually with system-outs, get your logging taken care of. This will not fix all your issues, but if you can get the easy 80% out of the way, that's huge. In my experience we are still having some challenges, as there are some custom approaches already in place, and people have a hard time breaking out of the sysout habit. I think I'm satisfied with using SLF4J, then letting the Log4j implementation handle logging and control the log verbosity (and the formatting of the logs...nothing worse than custom logging with many different outputs; get it consistent!).
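The shape of what I mean, leveled and parameterized logging instead of sysouts, sketched here with the JDK's built-in java.util.logging so it runs anywhere (SLF4J's API looks similar but needs its jars wired up; the method and messages are my own illustration):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class DiagnosticLogging {
    private static final Logger LOG = Logger.getLogger(DiagnosticLogging.class.getName());

    static int divide(int a, int b) {
        // FINE-level diagnostics: silent in production, flipped on when "it broke!"
        LOG.log(Level.FINE, "divide called with a={0}, b={1}", new Object[]{a, b});
        if (b == 0) {
            // SEVERE carries the context you need to make the correction.
            LOG.log(Level.SEVERE, "division by zero, a={0}", a);
            throw new IllegalArgumentException("b must be non-zero");
        }
        return a / b;
    }

    public static void main(String[] args) {
        System.out.println(divide(10, 2));
    }
}
```

The verbosity and formatting then get controlled in one place (the logging configuration), not scattered through the code.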

Step 3: You're walking, you're walking, let's try jogging.
By this point, you should be o.k. and taking care of business. Now you should be able to look at some more technical things to improve the development process.

Codability: This is where static code analysis tools come in; they are, again, not that invasive to use. Whether you generate reports or integrate them into the IDE, tools like PMD, Checkstyle, and FindBugs have the potential to ferret out potentially poor code. This is no replacement for peer review in any fashion; it is just a convenient way to identify common issues (note: these tools are not stone-cold rules; there are times things have to be coded a certain way).

Testability: Coverage tools like Clover, Emma, and JCoverage can help on the unit-testing side, showing where you can increase the amount of testing and catch flow branches (if/else/case/etc.) in the code that aren't tested well right now.

Step 4: The in-deep stuff
Once you've reached step 4, you can look at the items I listed back at the top of this page. Notice that I really didn't hit certain items --

Profiling is actually an invasive process most of the time. I haven't found a tool that can easily identify memory issues or performance bottlenecks *for* you; instead they, rightfully, require you to review the information and come to your own conclusions. Tools like TPTP, JProbe, JProfiler, etc. aren't quick-fix tools; you need to learn and understand them, and they are useful in different scenarios.

Multi-tier profiling/review: tracking from the web tier, through the server tier (rules/workflow/business logic), to the database tier (sql/db, or sproc) to help identify in which tier a particular slowdown or issue is occurring. Not something easy to do -- some tools, like the deprecated InfraRED and Glassbox, attempted to make this easier for us, but they don't seem to be active.

Integration testing: actually being able to do business testing across an entire integrated system programmatically, performance-testing SOA, or exercising the full cycle of pressing a button in the UI are all desirable goals, but not easy to set up and do.

Automated UAT/User Interface testing: some neat tools, like Selenium, can help step through testing a website and ensure things continue to work as expected. It's a great tool, but if you are constantly making changes, keeping those Selenium tests up to date can get time-consuming. Also, know that they do NOT identify blemishes or non-intuitive interfaces, only that the interface continues to work as expected.

Scalability testing: testing against the equivalent of 10 years' worth of data; testing 5, 10, 50, 1000 concurrent users; evaluating estimated load capacity per setup (proxy/multi-app servers/single db, db clustering, etc.). You can also throw disaster/recovery into the scalability testing as well. These are all very manual, very conscious development efforts, and definitely invasive and time-consuming.


Conclusion
Well, this looks more like a brain-dump than an organized blog, but sometimes just dropping information can be helpful to other people, and could solicit useful feedback!

Post-Edits:
A good article on a related subject: http://www.ddj.com/architect/184415470

Wednesday, November 12, 2008

All I want for Christmas 2008 - Full Featured Eclipse Database plugin

Full Featured Database Plugin for Eclipse.

Definition is vague, and of course dependent on whom you talk to. I, personally, am more of a developer than a DBA, so you could instead call what I'm asking for as a 'Developer Database Plugin' versus a 'DBA Database Plugin'.

So why not use SQLexplorer, DTP, Clay, or the several other Eclipse plugins that, yes, I have tested/used?

Because, they are missing:

*Good Source Control Support
*Stored Procedure Support
*Simplified View Support
*Easy multi database support

Good Source Control Support (specifically Subversion)

For me, I immediately think of us spoiled Eclipse programmers that can:

1) Quickly see on the navigation tree which items differ from what is in SCM (of the 3 points of view, we are only worried about what is deployed to the DB server you are pointing to and what is in SCM; active desktop changes should be deployed to the DB server first, then team-shared to SCM if all goes well).

2) Compare/diff easily and visually between what is in SCM and what I'm looking at now. I understand the tricky part: there are 3 POVs -- SCM, the active desktop, and what is deployed to the server you are pointing to. From my standpoint, you always pull from the 'deployed db server' environment, compare to SCM, then make changes and deploy back to the 'deployed db server'.

3) Everything in SCM. Everything. Entire schema -- tables, stored procedures, functions, indexes, triggers, etc -- everything. Database specific security will be a bit trickier from an 'abstract' database standpoint, so that's ok, but no excuse on anything else.

Stored Procedure Support

This is so blatantly obvious it scares the heck out of me how many plugins don't support it. If it can be read from JDBC or, more precisely, can be read via 'EXEC ' (which could be a simple project to abstract out regardless of the underlying database), it should be available from a plugin.

some caveats though --
1) Whatever mechanism is used to retrieve the deployed stored procedures, the -content- should be the same as what is stored to the SCM for easy compare/diff.

2) Keep the editors simple first. This means don't worry about it being t-sql, pl-sql, java, pgsql, etc -- just use simple text editors first, then work on developing more robust editors depending on the native stored procedure language of the underlying database. Work on it later....get this working for now first.


Simplified View Support

I'm not even going to bullet/number this. Raise your hand if you've had to use DTP to drill down through Database->Catalog->Schema-> when you thought you had already pre-defined those values in the connection. I'm all for flexibility, but tools like IDEs and IDE plugins are supposed to be designed for you to work effectively, so give us the ability to just look at what we want to look at (even if we have to configure it first).

Easy multi database support

Yes, we all know you can support any database that has a JDBC driver (one use case). Yes, we know there are specific features (execution plans, etc.) that require custom code/integration/libraries for specific databases (a second use case). But could you make it a little easier to get set up?

1) http://mirrors.ibiblio.org/pub/mirrors/maven2/ --- if you don't want to include the JDBC driver in your distribution, point your app directly here under the right groupId (directory structure) and let the person, through your UI, just pick a version and automatically download it to the right location for your app. Manually downloading and adding to /lib, or the classpath, is so old-school.

2) See #1. Don't be old-school.

3) I really do like seeing more advanced tools like execution plans, but don't let that be your primary focus unless you are running out of bugs/features. 80% of developers working with databases need all the other 80% of functionality first.


Data Modelling

-1) This is not in my list above.
-2) I actually enjoy data modelling, but unfortunately, when it comes to hitting the ground running, it sometimes gets in the way; you need direct access to the actual database you are working with (the specific MS SQL, Oracle, MySQL, etc.) to get work done.
-3) Triggers/sprocs with modelling...yeah...
-4) There are some pretty decent data modelling tools out there, so no need to solve a problem that already has several implementations -- but the list above really hasn't found a solution yet!



Update: I'm looking at liquibase.org for SCM support, but not sure if I can squeeze in enough time to give them a good test run.

Monday, November 03, 2008

Eclipse on Fedora 9

If you are reading this post, more than likely you already know what I'm talking about.

Fedora 9 comes with its GCJ-compiled 'Fedora Eclipse'. Nice idea, not well implemented: Eclipse update sites do not work correctly, and/or you may want to use a different Eclipse distribution.

So, using the normal Eclipse.org distribution, or a custom distribution from various vendors, one would just download, untar/unzip, and run, right?

Wrong...you get the splash screen, modules loading, then a small grey box. Looking in /.metadata/.log shows errors like this:

!ENTRY org.eclipse.ui.workbench 4 0 2008-10-14 15:35:07.364
!MESSAGE Widget disposed too early!
!STACK 0
java.lang.RuntimeException: Widget disposed too early!
at org.eclipse.ui.internal.WorkbenchPartReference$1.widgetDisposed(WorkbenchPartReference.java:171)
at org.eclipse.swt.widgets.TypedListener.handleEvent(TypedListener.java:117)
at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84)

.....
org.eclipse.swt.SWTError: XPCOM error -2147467262
at org.eclipse.swt.browser.Mozilla.error(Mozilla.java:1638)
at org.eclipse.swt.browser.Mozilla.setText(Mozilla.java:1861)
at org.eclipse.swt.browser.Browser.setText(Browser.java:737)
at org.eclipse.ui.internal.intro.impl.presentations.BrowserIntroPartImplementation.generateContentForPage(BrowserIntroPartImplementation.java:252)
at org.eclipse.ui.internal.intro.impl.presentations.BrowserIntroPartImplementation.dynamicStandbyStateChanged(BrowserIntroPartImplementation.java:451)
at org.eclipse.ui.internal.intro.impl.presentations.BrowserIntroPartImplementation.doStandbyStateChanged(BrowserIntroPartImplementation.java:658)


The fix is simple...once you know what to fix. The last part, the Mozilla error, was the key. It is not intuitive, but this is what you do to get an external Eclipse distribution to work on Fedora 9 with the Sun JDK or OpenJDK (not GCJ):

yum upgrade firefox

Yup, that's it. Just upgrade your Firefox install.

Monday, July 07, 2008

Freemarker CamelCase to underscore

Quick blog -- as always, the last 20% usually takes up 80% of the time.

This time, it was trying to simply convert Camel Case into equivalent underscore Enum values.

Ok, not that 'simple', but still -- I'm using Hibernate Tools to reverse engineer some JPA entities from JDBC, and that part is working fine. Now, the UI and some processes prefer to use a Model that sits on top of the entity/DTO. So, I thought I would be nice and auto-generate the Models that some other programmers swear by to make their job easier.

Hibernate Tools just moved to FreeMarker, which I was excited about, and I wrote most of the .ftl for my Model. Until I hit camel case.

You see, what they are trying to do is create an ENUM version of each field; I'm not going into detail why, but simply that code-generation wise --

fieldOne -> FIELD_ONE
myReallyLongComboField -> MY_REALLY_LONG_COMBO_FIELD

After a lot of messing around with FreeMarker and regular expressions, I finally got the solution down to two lines in the .ftl file (it is very important that the </#macro> stays exactly where it is):

<#macro toUnderScore camelCase>
${camelCase?replace("[A-Z]", "_$0", 'r')?upper_case}</#macro>


Then, make calls like:

<@toUnderScore camelCase=property.name/>

Perfect!
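The same transformation in plain Java, for anyone not generating through FreeMarker (class and method names are my own; the regex mirrors the macro above):

```java
public class CamelToUnderscore {
    // Mirrors the FreeMarker macro: prefix each capital with '_', then upper-case.
    static String toUnderscore(String camelCase) {
        return camelCase.replaceAll("([A-Z])", "_$1").toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(toUnderscore("fieldOne"));                // FIELD_ONE
        System.out.println(toUnderscore("myReallyLongComboField"));  // MY_REALLY_LONG_COMBO_FIELD
    }
}
```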


I've had quite a bit of experience in the past with Velocity, and put some work into code-gen tools like Middlegen (now defunct). Once you have a process/template for commonly used code pieces, code generation really helps enforce consistency and good practice.

Wednesday, June 20, 2007

Carnal Knowledge API

Quite the title, eh?

This post is about API, services, or interfaces that are obscure and require 'internal' knowledge to use successfully. What do I mean?

Object result = doIt(object1);

There are two specific scenarios that I think about for obscure API/services:
*Carnal Modification
*Carnal Returns

Carnal Modification
This happens only in APIs where the language allows passing references and the objects passed are mutable.

System.out.println(bean1.getValue()); //prints "default"
modifyJavabeanValue(bean1);
System.out.println(bean1.getValue()); //prints "modified"

By simply calling a method, the objects you passed to it have changed. This may not be an expected result, and you have to know that is the intent of the API...i.e., you have to have carnal knowledge of it. And do not be fooled if it has a return type: it can still modify the reference!
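The surprise, as a complete runnable sketch (the bean and method names are illustrative):

```java
public class CarnalModification {
    static class Bean {
        private String value = "default";
        String getValue() { return value; }
        void setValue(String v) { value = v; }
    }

    // Nothing in this signature warns the caller that its argument is changed.
    static String modifyJavabeanValue(Bean bean) {
        bean.setValue("modified");
        return "done"; // a return value does not mean the argument was left alone
    }

    public static void main(String[] args) {
        Bean bean1 = new Bean();
        System.out.println(bean1.getValue()); // default
        modifyJavabeanValue(bean1);
        System.out.println(bean1.getValue()); // modified -- the call mutated it
    }
}
```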

Carnal Returns
Carnal returns requires significant pre-knowledge on how to handle the return.

Object o = getMyStuff();

In the above example, you have no idea what is supposed to be returned, and even worse, it may return one of, say, five different types of objects that share no common interface. Sure, you can check/reflect (depending on the language) on what the actual object type is. Horrible!!!

String result = changeThis(rawdata);

This example is almost as bad: the returned String content may be something unexpected, i.e., it could be XML, could be a comma-delimited string, could be raw java/perl/php code that you are expected to run. This can be alleviated easily with documentation AND by specifying the expected result in the method signature:

String result = changeThisToXML(rawdata); //returns XML
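The contrast as a runnable sketch (the method bodies are made up just to show that only the second signature tells the caller what to expect):

```java
public class CarnalReturns {
    // Bad: the caller must just "know" this returns XML.
    static String changeThis(String rawdata) {
        return "<data>" + rawdata + "</data>";
    }

    // Better: the name (backed by documentation) states the format up front.
    static String changeThisToXML(String rawdata) {
        return "<data>" + rawdata + "</data>";
    }

    public static void main(String[] args) {
        System.out.println(changeThisToXML("42")); // <data>42</data>
    }
}
```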

Awareness
I am just trying to share some awareness: just because you found a neat/cool way to pull something off, other people (or you, using someone else's API) may run into obscure or unexpected results related to carnal knowledge requirements. There are indeed times when you can only do it a certain way; just remember to document and adjust your method signatures to make it as clear as possible -- you never know, 5 years later you might have to use your own API/Service!


NEW: I recently learned that, surprisingly, there is functionality when writing stored procedures to *change* the fields in the result set based on parameters passed in...and that people do this!! Exact same problem.

Wednesday, November 08, 2006

Solve Business Problems Now, Technical Later

In working with a combination of C-level management, open source projects and their developers, commercial projects and their support staff, and my own team, I came to a pretty good resolution that clearly defines where I stand on certain topics:

  • Solve the business problem now; if there are technical issues that will take a while to resolve, solve them later.

Obviously, there are certain conditions that should be met, such as solving the business problem with enough architecture/engineering to help with maintenance issues, changes, etc. -- but the point is to not over-engineer, and not to wait for a technical 'fix' unless it clearly fits within the timeframe for solving the business problem, or skipping it would impact the quality of the business solution.

Cost: it *will cost more* to solve the business problem now and take care of technical issues later. This is something most management and customers do not really want to hear and, frankly, usually do not care about as long as the business problem gets solved.

Revenue Stream: However, as a technical representative of an organization, and more importantly as an employee or consultant, the sooner you can bring in and/or maintain the revenue stream, the better for the organization overall. This may or may not cover the additional cost associated with the above statement, but quicker time-to-market is usually a good thing, as long as the quality *of solving the business problem* is not compromised.

Why this rant? It's not a rant; it's been a thorn for a lot of individuals and teams. Some people are very good at the so-called 'quick and dirty' solutions that get something up and running, then spend *enormous resources* maintaining that solution. Other, over-engineered solutions may miss deadlines and run over budget, but once deployed *may* cost much less over time (TCO) compared to an equivalent quick-and-dirty solution.

There is no perfect answer, other than that no matter what, a technical project will have costs during development and costs for maintenance -- but it has no value unless it solves a business problem.