As already mentioned, JPA != EJB; they're not even related. EJB 3 happens to leverage JPA, but that's about it. There's a lot of stuff out there using JPA that doesn't come anywhere near running EJB.
Your problem isn't the technology, it's your design.
Or rather, your design doesn't fit readily into pretty much any modern framework.
Specifically, you're trying to span transactions across multiple HTTP requests.
Naturally, most of the common idioms have each request being one or more transactions on its own, rather than each request being one part of a larger transaction.
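For what it's worth, here's a minimal sketch of that common idiom with plain JPA inside a servlet, where the whole transaction begins and commits within a single request. The servlet name and persistence unit ("example-unit") are made up for illustration, not taken from your setup:

```java
import jakarta.persistence.EntityManager;
import jakarta.persistence.EntityManagerFactory;
import jakarta.persistence.Persistence;
import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;

public class OrderServlet extends HttpServlet {
    private final EntityManagerFactory emf =
            Persistence.createEntityManagerFactory("example-unit");

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) {
        EntityManager em = emf.createEntityManager();
        try {
            em.getTransaction().begin();
            // ... do all of the work for this request here ...
            em.getTransaction().commit();   // the transaction never outlives the request
        } catch (RuntimeException e) {
            if (em.getTransaction().isActive()) {
                em.getTransaction().rollback();
            }
            throw e;
        } finally {
            em.close();
        }
    }
}
```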
There's also the obvious confusion of using the terms "stateless" and "transaction" in the same discussion, since transactions are inherently stateful.
Your big problem is simply managing your transactions by hand.
If you're running a transaction across multiple HTTP requests, and those requests happen to execute "really fast" right after one another, then you shouldn't really have a problem, save that you'll need to make sure your HTTP requests use the same database connection in order to leverage the database's transaction facilities.
That is, you get a connection to the database, stash it in the session, and make sure that for the duration of your transaction all of your HTTP requests go through not just the same session, but in such a way that the actual connection is still valid. Specifically, I don't believe there's an off-the-shelf JDBC connection that will actually survive a failover or load balancing from one machine to another.
So, put simply, if you want to use database transactions, you need to make sure you're using the same database connection throughout.
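If you really did want to go down that road, the mechanics would look roughly like this: stash the raw JDBC Connection in the HTTP session on the first request and keep reusing it until the final request commits. Everything here (ConversationConnection, DB_CONN_KEY, dataSource) is made up for illustration, and it inherits every fragility mentioned above, so treat it as a sketch rather than a recommendation:

```java
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;
import jakarta.servlet.http.HttpSession;

public class ConversationConnection {
    private static final String DB_CONN_KEY = "conversation.db.connection";

    // First request: open a connection, start the DB transaction, stash it in the session.
    static Connection begin(HttpSession session, DataSource dataSource) throws SQLException {
        Connection conn = dataSource.getConnection();
        conn.setAutoCommit(false);              // transaction stays open across requests
        session.setAttribute(DB_CONN_KEY, conn);
        return conn;
    }

    // Subsequent requests: reuse the very same connection from the session.
    static Connection current(HttpSession session) {
        return (Connection) session.getAttribute(DB_CONN_KEY);
    }

    // Final request: commit (or roll back) and release the connection.
    static void end(HttpSession session, boolean commit) throws SQLException {
        Connection conn = current(session);
        if (conn != null) {
            try {
                if (commit) conn.commit(); else conn.rollback();
            } finally {
                conn.close();
                session.removeAttribute(DB_CONN_KEY);
            }
        }
    }
}
```

Note that the connection stays open (and holds its locks) between requests, which is exactly why this only works when the requests come in "really fast" and nothing fails over in between.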
Now, if your long transaction has "user interactions" in it, i.e. you start a database transaction and then wait for the user to "do something", then that design is simply wrong. You DO NOT want to do that, as long-lived transactions, especially in interactive environments, are just plain bad. Like "crossing the streams" bad. Don't do it. Batch transactions are different, but interactive long-lived transactions are bad.
You want your online transactions to be as short as possible.
Now, if you can't guarantee that you'll be able to use the same database connection for your transaction, then, congratulations, you get to implement your own transactions. That means you get to design your system and data flows as if you had no transactional capability on the back end.
That essentially means you'll need to come up with your own mechanism for "committing" your data.
A good way to do this is to build up your data incrementally into a single "transaction" document, then feed that document to a "save" routine that does most of the real work. For example, you could store a row in the database and flag it as "unsaved". You do that with all of your rows, and finally call a routine that runs through all of the data you just stored and marks it all as "saved" in a single, transactional mini-batch process.
Meanwhile, all of your other SQL ignores anything that isn't "saved". Throw in some timestamps and a reaper process (if you even want to bother; it may well be literally cheaper to simply leave dead rows in the database, depending on volume) to clean out these dead "unsaved" rows, since they're effectively "uncommitted" transactions.
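Here's a rough sketch of that "unsaved flag" approach using plain JDBC. The table and column names (order_line, pending_group, payload, saved, created_at) are hypothetical, and the date arithmetic in the reaper varies by database dialect, so adapt as needed:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ManualCommit {

    // Each intermediate request inserts its rows flagged as unsaved, tagged
    // with a group id that ties the whole application-level "transaction" together.
    static void addLine(Connection conn, String groupId, String payload) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO order_line (pending_group, payload, saved, created_at) " +
                "VALUES (?, ?, FALSE, CURRENT_TIMESTAMP)")) {
            ps.setString(1, groupId);
            ps.setString(2, payload);
            ps.executeUpdate();
        }
    }

    // The final "save" routine: one short, real database transaction that marks
    // everything in the group as saved -- this is the application-level commit.
    static void save(Connection conn, String groupId) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE order_line SET saved = TRUE WHERE pending_group = ?")) {
            ps.setString(1, groupId);
            ps.executeUpdate();
        }
    }

    // All other queries simply ignore unsaved rows, e.g.:
    //   SELECT ... FROM order_line WHERE saved = TRUE

    // Optional reaper: purge stale unsaved rows, i.e. abandoned "transactions".
    static void reap(Connection conn) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "DELETE FROM order_line " +
                "WHERE saved = FALSE AND created_at < CURRENT_TIMESTAMP - INTERVAL '1' DAY")) {
            ps.executeUpdate();
        }
    }
}
```

The point is that the final UPDATE is the only real database transaction that matters; everything written before it stays invisible to the rest of the application until that flag flips.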
This isn't as bad as it sounds. If you really want a stateless environment, which it sounds like you do, then you'll need to do something like this.
Mind, in all of this the persistence technology really has nothing to do with it. The problem is how you use your transactions, not the technology so much.