Monday, August 23, 2010

OpenEJB unit testing for JBoss deploys

Some notes on using a Mavenized src/main/resources/META-INF/openejb-jar.xml:

<!-- make backward compatible with JBoss-style deployments. For EAR deploys, prefix the format with the EAR name, e.g. EARname/{deploymentId} -->
openejb.deploymentId.format = {ejbName}
openejb.jndiname.format = {deploymentId}/{interfaceType.annotationNameLC}
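
For illustration (the bean name is hypothetical), and assuming OpenEJB resolves {interfaceType.annotationNameLC} to the lowercase annotation name ('remote'/'local'), a stateless bean named CustomerService with a @Remote business interface would come out as:

```properties
# {ejbName}                                  -> deploymentId: CustomerService
# {deploymentId}/{interfaceType.annotationNameLC}
#                                            -> JNDI name:    CustomerService/remote
```

That matches the JBoss "EjbName/remote" lookup convention for non-EAR deploys.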

Monday, August 09, 2010

Eclipse JPA tooling, Hibernate (JBoss) tooling

Working on ways to improve the tooling/work environment when in a JPA project.

In the past I pretty much hand-coded everything and relied on Maven/unit tests to catch errors.

Quicknote experiences:

* To get JPA Tooling working, you need to map the JDBC driver manually/directly to the driver jar's filesystem location through the Eclipse Data Management features.

* More on JPA tooling, particularly with a Maven layout, here:

* To get Hibernate Tooling working, you need to add the JDBC driver to the project classpath, EVEN IF you are using the "Database Connection: JPA project configured" option (i.e., the direct filesystem jar mapping above does not carry over to the Hibernate Tooling).

* In the persistence.xml, to avoid dealing with a lot of issues, remove the JTA requirements. This works for me because the Entity class/domain lives in a project separate from the Session Beans (the Entity Managers): the Entity project gets a non-JTA persistence.xml, while the Session Bean (entity manager) project keeps a JTA persistence.xml. I hate the inconsistency, but this is the only way it seems to work.
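
As a sketch of that split (the unit names and data source are hypothetical, not our actual configuration), the two persistence.xml files differ roughly like this:

```xml
<!-- Entity/domain project: non-JTA, usable from plain unit tests -->
<persistence-unit name="domainPU" transaction-type="RESOURCE_LOCAL">
  <!-- entity classes, local JDBC properties, etc. -->
</persistence-unit>

<!-- Session Bean (entity manager) project: container-managed JTA -->
<persistence-unit name="servicePU" transaction-type="JTA">
  <jta-data-source>java:/SomeDS</jta-data-source>
</persistence-unit>
```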


* In JPA tooling, I immediately checked the model against the database structure, and it identified a couple of case-sensitivity mismatches between field names and column names that were easy to fix.
* In Hibernate tooling, you can test-run JPA-QL queries to see whether they work as expected, check the timing, and review the results. You can also look at the Dynamic SQL Preview to see the actual SQL used, handy for future index optimizations.

Monday, July 26, 2010

(CI) Building Eclipse PDE plugins from Maven is a pain in the arse.

I evaluated maven-pde-plugin, which one would think would make this easy; turns out, not so much.

I've swapped over to Tycho (because it appears to better support multiple build types, like update sites and RCP apps directly, instead of just plugins and features), but even the most basic setup isn't proving trivial.

But using Tycho 0.9.0 from the org.sonatype.tycho groupId on ibiblio (not to be confused with org.codehaus.tycho, or several other groupIds I've run into), you still hit issues:

Errors like "Cannot find lifecycle mapping for packaging: 'eclipse-plugin'" come up a lot. On the off-chance it was a dependency issue: you are required to use an unstable release of Maven 3 (as of 7/26/2010, at any rate). With Maven 3.0-beta-1 you instead get "Unknown packaging: eclipse-plugin"; not much help there either.

Searching for help on either of these issues turns up posts like "fixed in Tycho 0.5.0" or "you need to modify how you build from source"... which, if you get the binary from a public Maven repository, one would hope would just work (that is, after all, why most people use Maven: so you DON'T run into these issues).

Other people mention "update m2eclipse"... except I'm running this from the command line, with the goal of eventually moving to Hudson/Continuous Integration. Maybe I misunderstand the purpose of this Maven plugin and it must be used inside Eclipse with m2eclipse?

Please help if you read this!

EDIT: the reason I'm chasing down automated Eclipse PDE builds:
1) I have an RCP app I would like to migrate over (from Eclipse 3.0, unfortunately)
2) the primary reason was to pre-load company JDBC drivers for use in Eclipse

ANSWER: do not assume the 'convention':

WRONG plugin artifactId: maven-tycho-plugin

CORRECT plugin artifactId: tycho-maven-plugin
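
For reference, the registration looks roughly like this (a sketch: the rest of the build section is elided, and extensions=true is what lets Maven pick up the eclipse-plugin packaging):

```xml
<packaging>eclipse-plugin</packaging>
...
<plugin>
  <groupId>org.sonatype.tycho</groupId>
  <artifactId>tycho-maven-plugin</artifactId>
  <version>0.9.0</version>
  <extensions>true</extensions>
</plugin>
```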

Wednesday, July 21, 2010

Web Browser plugins, how I loathe thee, let me count the ways.....

I have a passionate dislike for web browser plugins. Yes, they add exciting new features... features that you may or may not be able to control, and that may not behave predictably across the world-wide-web.

Take, for example, two very common plugin families that I usually have to deal with for reporting, document management/archiving, etc.

PDF plugins (Adobe)
TIFF plugins (variety)

Adobe PDF plugins -
  • Upgrades versions regularly; users have to regularly "update the site", even though it's not the site, it is the plugin asking for upgrades.
  • To embed or not to embed, and dealing with whether pop-ups are allowed.
  • This one is a good one: the web-embedded Adobe plugin makes *multiple* HTTP requests for the same content, and if your logging didn't account for that, you get multiple log entries (see HTTP 206, byte serving/byte-range requests).
TIFF plugins -
  • A variety of plugins with different options/features/controls (even something "simple" like whether the plugin allows multi-page viewing is apparently not standard?!)
  • You, your client/customer, or someone, has Outlook/Office installed, and it gets an update, a critical update, a security update, whatever -- and the system reverts to the MS TIFF viewer by default, despite your best efforts to use a different TIFF plugin.
  • TIFF encoding/compression formats (e.g., G3/fax compression with an X-Y ratio difference, which some plugins understand and display "correctly", while others ignore the ratio and show "crunched" images).
  • And the occasional TIFF with a byte that Plugin XYZ doesn't understand, but some other plugin does; yes, they happen.
Then you add in the other plugins -- flash/shockwave, Java applets, ActiveX/Silverlight, codec/encoding video players (QuickTime, RealPlayer, Windows Media Player, DivX, ...) -- and developers just cannot wait until HTML 5 becomes a real-world, real-usage deal.

Friday, April 23, 2010

BigDecimal v Float/float or Double/double for java transport

As I have posted previously, quite often I get involved in some type of financial portion of a solution, or the entirety of the solution is financial.

In java, BigDecimal is where you go for computational accuracy -- but what about if you just need to transport the data?

So, I reviewed the information on the Sun/Oracle JDK site, and if you go search and read it, it isn't overly definitive on float/double (from a "do I want to use them or not" standpoint).

After going through many other posts, mailing-list searches, and reviews, I broke down and posted a question here:

I also started doing some manual tests myself, and finally got the 'answer' I was looking for:

float: 9 'locations'
double: 15 'locations'

What are locations? In my testing, I found that a float can accurately store and retrieve 6 digits before the decimal and 3 after... or 3 before/6 after, or any variation on that theme. Similarly for double: 9 before/6 after, and other variations.

Needless to say, that's why the documentation is vague: how many digits you can store before the decimal depends on the scale you need after it. (Note: the commonly cited safe figures are 6-7 significant decimal digits for float and 15-16 for double, so treat my 9 for float as optimistic.)

So, unless you can get a definitive max value and precision rule for a financial application, you might want to stick with the heavyweight of BigDecimal just to be sure.
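
The digit limit is easy to see with a quick sketch (the value below is arbitrary; nine significant digits, like a large monetary amount):

```java
public class FloatPrecisionDemo {
    public static void main(String[] args) {
        // Nine significant digits: a float mantissa (24 bits, roughly 7
        // decimal digits) cannot hold them all; a double (53 bits, roughly
        // 15-16 decimal digits) can.
        float f = 1234567.89f;
        double d = 1234567.89;
        System.out.println("float : " + f); // cents digit already lost
        System.out.println("double: " + d); // value intact
    }
}
```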


Edit: I forgot to post *why* I was even looking at this!!

We were having memory issues with an outsourced application (which lacked pagination) that had a DTO with 12 monetary value fields... 12 BigDecimals per DTO. The List sizes ranged from 300 to 2,000 to 40k. The 40k case (the most extreme) was taking up 45MB of memory! Changing those 12 fields from BigDecimal to primitive float dropped the same List size down to 15MB (a third!).

However, float did not satisfy the accuracy this application needed, so although I'm evaluating Double, I may opt to play it safe and keep accuracy as more important than saving memory (and, instead, actually paginate the results!).

Thursday, January 14, 2010

Embedded DB - Sort Stability, Pagination

We use application-level pagination. I won't go into all the reasons, but several of them are business reasons.

What is application-level pagination? Someone wants to view 50,000 records through a web screen (just go with it; there are reasons).
  • Make the query with the default/starting sort order.
  • Cache the current result set locally in the application layer (in our case, into an embedded Hypersonic, H2, or Derby database that writes to file when there is too much to fit in memory).
  • Return the first results back to the web screen (say, 50 records per page).
--person goes to 'next page': get the next 50 records from the local DB result set.

--person re-sorts the existing result set: re-sort from the local DB result set (instead of re-querying the origin DB), return the first page.
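
The paging/re-sort behaviour above can be sketched in plain Java, with an in-memory list standing in for the embedded-DB cache (the class and method names are mine for illustration, not our actual code):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

/** Caches one query's full result set and serves sorted pages from it. */
public class PageCache<T> {
    private final List<T> cached;

    public PageCache(List<T> results) {
        // Copy once; subsequent paging/re-sorting never re-hits the origin DB.
        this.cached = new ArrayList<>(results);
    }

    /** Re-sort the cached set in place (user clicked a column header). */
    public void resort(Comparator<T> order) {
        cached.sort(order);
    }

    /** Return one page of the current ordering (pageIndex is zero-based). */
    public List<T> page(int pageIndex, int pageSize) {
        int from = pageIndex * pageSize;
        if (from >= cached.size()) {
            return Collections.emptyList();
        }
        return cached.subList(from, Math.min(from + pageSize, cached.size()));
    }
}
```

The embedded-DB version behaves the same way logically, but spills to disk instead of holding everything on the heap.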

Problems we ran into:

We found that certain embedded databases did not work out well for this challenge. Hypersonic and H2 both didn't seem to handle (at least with default settings) the multi-user/asynchronous (web/AJAX) nature of the sort requests, and the result sets became inaccurate when "pushed too hard" (a user requests a sort, then changes their mind mid-sort and requests a different sort).

Derby, however, did seem to resolve this issue for us. Yes, there are different ways to handle pagination, but we needed to satisfy the business request for how the behaviour was expected to act.

If someone has some similar experiences with application-level caching of large result sets, re-orders, pagination, please share!