Even simpler Java / SQL data transfer needed

So, I am using JDBC to communicate with a MySQL database. For many tables and for many queries / views, I created a class that encapsulates a single row of the table or a single row of the query / view result. A database access returns either one object of such a class (when I know for sure there is only one matching row) or a Vector of such objects.

Each class has a factory method that builds an object from a ResultSet row. This requires many ResultSet.getXXX() calls, as well as careful bookkeeping of which value sits in which column, especially after changes to the table / query / view layout.
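For illustration, a hand-written class of the kind described might look like the sketch below (the employee table, its columns, and the class name are assumptions, not from the question):

import java.sql.ResultSet;
import java.sql.SQLException;

// One hand-written class per table / query row; the factory method is the
// boilerplate that has to be kept in sync with the table layout by hand.
public class EmployeeRow {
    private int id;
    private String name;

    public static EmployeeRow fromResultSet(ResultSet rs) throws SQLException {
        EmployeeRow row = new EmployeeRow();
        row.id = rs.getInt("id");        // each getXXX call must track the real column name and type
        row.name = rs.getString("name");
        return row;
    }

    public int getId() { return id; }
    public String getName() { return name; }
}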

Creating and maintaining these classes is a boring, time-consuming, mind-numbing task. In other words, exactly the kind of task a tool should do. It should read SQL (the MySQL dialect, alas) and generate Java code, or at least give me a representation (XML? DOM?) of a table / query / view so that I can generate the Java code myself.

Can you name such a tool?

+4
5 answers

If you are looking for a simple framework to help you with the SQL, I would recommend iBATIS SQL Maps. This framework does basically exactly what you want.

Hibernate is also a good option, but it seems a little like overkill for a simple problem like yours.

You could also look at the Spring Framework. It is designed to make writing Java applications simpler and has a very useful SQL abstraction. But be careful with Spring: you might come to like the framework and spend too many happy hours with it 8)
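For the Spring option, a minimal sketch of its JDBC abstraction (JdbcTemplate with a RowMapper) is shown below; the Employee bean, table, and column names are assumptions for illustration:

import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.List;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.RowMapper;

public class EmployeeQueries {
    // Minimal bean, just for the example.
    public static class Employee {
        public int id;
        public String name;
    }

    private final JdbcTemplate jdbc;

    public EmployeeQueries(DataSource dataSource) {
        this.jdbc = new JdbcTemplate(dataSource);
    }

    public List<Employee> findAll() {
        // The RowMapper keeps the ResultSet-to-object code in one place per table.
        return jdbc.query("select id, name from employee", new RowMapper<Employee>() {
            public Employee mapRow(ResultSet rs, int rowNum) throws SQLException {
                Employee e = new Employee();
                e.id = rs.getInt("id");
                e.name = rs.getString("name");
                return e;
            }
        });
    }
}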

Regarding your concern about reflection: it no longer carries a serious performance overhead in Java (at least since version 1.4), and that holds for O/R mapping tools as well.

In my experience, it is better to take care of well-written, easy-to-understand code than of some performance overhead that may only matter in theory.

In most cases, performance problems do not appear where you expect them, and they can only be identified by measuring your code after it has been written. The most common performance problems are related to I/O or to mistakes in your own code (e.g. creating new instances in bulk, or loops with millions of iterations that are not needed), not to the JDK itself.

+2

I'm a little confused by your question. Why don't you use an object-relational mapping framework such as Hibernate?

I had the same problem of reading and writing a lot of SQL by hand. In the end I started writing new projects with Hibernate and have not looked back. Hibernate takes care of creating the actual tables and running the SQL behind the scenes, and I can work mostly with Java objects.
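As a rough illustration of what that looks like, a mapped entity might be little more than the sketch below (the Employee class, table, and columns are assumptions; older Hibernate versions use hbm.xml mapping files instead of JPA annotations):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "employee")
public class Employee {
    @Id
    @Column(name = "id")
    private int id;

    @Column(name = "name")
    private String name;

    public int getId() { return id; }
    public void setId(int id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

With a mapping like this, something along the lines of session.createQuery("from Employee").list() hands back mapped objects without any hand-written ResultSet code.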

+5

I created a mini-framework like this years ago, but it was for prototyping, not for production.

The idea is the following, and it is very simple to do. The trade-off is the cost of using reflection, although Hibernate and the other ORM tools pay that cost as well.

The steps are:

  • You have a DAO class that executes the query.

  • Read the ResultSet metadata, which gives you the table name, fields, types, etc.

  • Find a class on the classpath whose name matches the table name and / or that has the same number and types of fields.

  • Set the values using reflection.

  • Return the object, cast it on the other side, and you're done.

It may seem absurd to look up a class at runtime, and it may look too risky because the query or the table structure may change. But think about it: when that happens, you have to update your mappings anyway to match the new structure. So instead you just update the corresponding class and you're done.

I don't know what ORM tools do to reduce the cost of the reflection calls (the mapping itself only helps them find the right class). In my version, searching through about 30,000 classes (I added jars from other places to test it) took only about 0.30 ms. I cached the class, so the second time no search was needed at all.
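A sketch of the kind of cache described (names are hypothetical; the resolver stands in for the toCamelCase plus classpath-scan step shown in the code below):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TableClassCache {
    public interface Resolver {
        // e.g. camel-case the table name and scan the classpath for a match
        Class<?> resolve(String tableName) throws ClassNotFoundException;
    }

    private final Map<String, Class<?>> cache = new ConcurrentHashMap<String, Class<?>>();
    private final Resolver resolver;

    public TableClassCache(Resolver resolver) {
        this.resolver = resolver;
    }

    public Class<?> forTable(String tableName) throws ClassNotFoundException {
        Class<?> clazz = cache.get(tableName);
        if (clazz == null) {
            clazz = resolver.resolve(tableName); // slow path: the ~0.3 ms search
            cache.put(tableName, clazz);
        }
        return clazz;
    }
}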

If you are interested (and still reading), I will try to find the library on my old PC.

In the end, my code looked something like this:

Employee e = (Employee) MagicDataSource.find("select * from employee where id = 1");

or

Employee[] emps = (Employee[]) MagicDataSource.findAll("select * from employee");

Inside, it looked like this:

Object[] findAll(String query) throws Exception {
    ResultSet rs = getConnection().prepareStatement(query).executeQuery();
    ResultSetMetaData md = rs.getMetaData();
    String tableName = md.getTableName(1);
    // search in a list where all the class names were loaded
    String clazz = findClass(toCamelCase(tableName));
    Class<?> type = Class.forName(clazz);
    List<Object> result = new ArrayList<Object>();
    while (rs.next()) {
        Object object = type.newInstance();
        // for each column: read the value and call the matching setter via reflection
        result.add(object);
    }
    return result.toArray();
}

If anyone knows how ORM tools manage the reflection cost, please let me know. The code I have read in open-source projects does not seem to do anything special about it.

In the end, it let me create quick little programs for system monitoring and similar things. I no longer do that kind of work, and the library is now forgotten.

0

Besides ORM ...

If you are using the rs.getString and rs.getInt routines, you can certainly lighten your maintenance load by relying on named columns rather than column numbers.

Specifically, rs.getInt("id") rather than rs.getInt(1), for example.

It is rare for the actual data type of a column to change on me, so ongoing SQL maintenance amounts to little more than adding whatever new columns were added to the table, and you can simply tack those onto the end of the bind list in each of your little DAO objects.

You then take this idiom of using column names and extend it into a scheme of consistent and, at the same time, "unique" names. The goal is for every column in your database to have a unique name associated with it. It can be as simple (albeit verbose) as tablename_columnname, so if you have a table called "member", the name for its id column is "member_id".

What does this buy you?

It buys you the ability to use your common DAOs on any "valid" result set.

A "valid" result set is one whose columns follow the unique naming scheme.

So you write "select id member_id, name member_name from member where id = 1".

Why do you need this? Why bother?

Because then your associations become trivial.

PreparedStatement ps = con.prepareStatement(
    "select m.id member_id, m.name member_name, p.id post_id, p.date post_date, p.subject post_subject "
    + "from member m, post p where m.id = p.member_id and m.id = 123");
ResultSet rs = ps.executeQuery();
Member m = null;
Post p = null;
while (rs.next()) {
    if (m == null) {
        m = MemberDAO.createFromResultSet(rs);
    }
    p = PostDAO.createFromResultSet(rs);
    m.addPost(p);
}

Notice that the binding logic here does not care what else is in the result set; it only looks at the columns it is interested in.

In your DAOs, you then make them a little smarter about the ResultSet. It turns out that if you call rs.getInt("member_id") and member_id does not actually happen to BE in the result set, you get a SQLException.

But with a little work using ResultSetMetaData, you can do a quick up-front check (grabbing all the column names first), and instead of calling rs.getInt you call baseDAO.getInt, which handles that check for you so you never get the exception.
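A minimal sketch of such a base-DAO helper, assuming the pre-check collects the column labels from ResultSetMetaData (the names here are illustrative, not from the original answer):

import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.util.HashSet;
import java.util.Set;

public abstract class BaseDAO {
    // Collect the column labels once per ResultSet so the getters can check them cheaply.
    protected static Set<String> columnLabels(ResultSet rs) throws SQLException {
        ResultSetMetaData md = rs.getMetaData();
        Set<String> labels = new HashSet<String>();
        for (int i = 1; i <= md.getColumnCount(); i++) {
            labels.add(md.getColumnLabel(i).toLowerCase());
        }
        return labels;
    }

    // Like rs.getInt(name), but returns a default instead of throwing when the column is absent.
    protected static int getInt(ResultSet rs, Set<String> labels, String name, int defaultValue)
            throws SQLException {
        return labels.contains(name.toLowerCase()) ? rs.getInt(name) : defaultValue;
    }
}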

The beauty of this is that once you have it, you can easily work with partially populated DAOs.

PreparedStatement ps = con.prepareStatement("select m.id member_id from member m where m.id = 123");
ResultSet rs = ps.executeQuery();
Member m = null;
if (rs.next()) {
    m = MemberDAO.createFromResultSet(rs);
}

Finally, it is a really (really) trivial bit of scripting (using, say, AWK) to take a bean's properties and turn them into the proper binding code for the matching DAO. A similar script can easily take a SQL table definition and turn it into a Java bean (at least the basic fields), which your IDE can then expand into a flurry of getters/setters.

By centralizing the binding code in the DAO, maintenance really becomes almost a non-issue, since changes happen in one place. And with partial bindings you can abuse the DAOs mercilessly.

PreparedStatement ps = con.prepareStatement(
    "select m.name member_name, max(p.date) post_date "
    + "from member m, post p where p.member_id = m.id and m.id = 123");
ResultSet rs = ps.executeQuery();
Member m = null;
Post p = null;
if (rs.next()) {
    m = MemberDAO.createFromResultSet(rs);
    p = PostDAO.createFromResultSet(rs);
}
System.out.println(m.getName() + " latest post was on " + p.getDate());

Your burden going forward is mostly writing SQL, but even that isn't terrible. There is not a lot of difference between writing SQL and writing EQL. The main nuisance is having to spell out a long column list in every select statement, since you cannot (and should not) use "select * from ..." (select * always (ALWAYS) leads to a problem, IME).

But that is just reality. In practice I have found that (unless you are doing reporting) this simply does not come up very often. It happens at least once for most tables, but rarely over and over again. And of course, once you have done it once, you can "cut and paste" your way to glory, or refactor it (i.e. sql = "select " + MemberDAO.getAllColumns() + ", " + PostDAO.getAllColumns() + " from member m, post p").

Now, I like JPA and ORM. I find them useful, but I also find them a PITA. There is a certain love/hate relationship there. When everything goes smoothly, boy, is it smooth. But when it gets rocky, boy, can it get ugly. In general, though, I recommend them.

But if you are looking for a "light" non-framework, this technique is useful, practical, low-overhead, and gives you a great deal of control over your needs. There is simply no black magic or dark matter between your queries and your database, and when something doesn't work, it is not some obscure misunderstanding of the framework or an edge case buried in someone else's 100K lines of code; rather, the bug is in your SQL, which is where it belongs.

0

Edit: Never mind. While searching for a solution to my own problem, I forgot to check the date on this thing. Sorry. You can ignore the following.

@millermj - Are you doing this for fun or because you need it? Just curious, because it sounds exactly like something Java IDEs such as Eclipse and NetBeans already provide (using the Java Persistence API): the New -> JPA -> Entity Classes from Tables feature.

I may be missing the point, but if someone just needs classes that correspond to their tables and can be persisted, JPA plus a bit of IDE "magic" may be enough.

0
