HTTP server using Lua / Torch7

I'm starting to learn Torch7 to get into the machine/deep learning field, and I find it fascinating (and very difficult, haha). My main question, however, is how I can turn this training into an application: basically, can I turn my Torch7 Lua scripts into a server that an application can use to perform machine learning calculations? And if so, how?

thanks

+5
5 answers

You should look at Torch as a library (even though it is also available as a standalone executable). That library can be used from Lua code that is exposed over HTTP. The Lua code can run inside OpenResty, which will take care of all the HTTP interactions, and you will get good performance since OpenResty uses LuaJIT.
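For illustration, a minimal sketch of what such an OpenResty setup could look like (an nginx.conf fragment; the /predict path and the toy computation are placeholders, and it assumes Torch is installed against the same LuaJIT that OpenResty runs):

    # inside the server { ... } block of nginx.conf
    location /predict {
        content_by_lua_block {
            local torch = require 'torch'    -- load Torch from LuaJIT's package path
            local x = torch.randn(3)         -- stand-in for a real model evaluation
            ngx.say('sum = ', torch.sum(x))  -- write the HTTP response body
        }
    }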

Another option is to handle HTTP with the luasocket and copas libraries (for example, via Xavante), or to use one of the servers listed on the LuaWebserver page.
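To see the moving parts, a bare-bones copas/luasocket responder (not using Xavante) might look like this sketch; the port and the fixed JSON body are placeholders, and a real handler would parse the request and call into Torch:

    local socket = require 'socket'
    local copas = require 'copas'

    local server = socket.bind('*', 8080)
    copas.addserver(server, function(sock)
       sock = copas.wrap(sock)      -- make the socket cooperate with copas coroutines
       sock:receive('*l')           -- read the request line; headers are ignored here
       local body = '{"ok":true}'
       sock:send('HTTP/1.1 200 OK\r\nContent-Length: ' .. #body .. '\r\n\r\n' .. body)
    end)
    copas.loop()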

+2

You can use waffle. Here is the Hello World example from its page:

    local app = require('waffle')

    app.get('/', function(req, res)
       res.send('Hello World!')
    end)

    app.listen()

Let's say your algorithm is a simple face detector: the input is an image and the output is a set of face detections in some JSON format. You could do the following:

    local app = require('waffle')
    require 'graphicsmagick'
    require 'MyAlgorithm'

    app.post('/', function(req, res)
       local img = req.form.image_file:toImage()
       local detections = MyAlgorithm.detect(img:double())
       local outputJson = {}
       if detections ~= nil then
          outputJson.faceInPicture = true
          outputJson.faceDetections = detections
       else
          outputJson.faceInPicture = false
          outputJson.faceDetections = nil
       end
       res.json(outputJson)
    end)

    app.listen()

Thus, your algorithm can be used as an independent service.

+5

You can also use the async package, which we have tested with Torch.
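For reference, a minimal HTTP server along the lines of the async package's examples (the address and response body are placeholders; a real handler would run a Torch model on each request):

    local async = require 'async'

    async.http.listen('http://0.0.0.0:8080/', function(req, res)
       -- compute something with Torch here, then answer:
       res('Hello World!', {['Content-Type'] = 'text/plain'})
    end)

    async.go()   -- start the event loop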

+3

Both async and waffle are great options. Another option is to use ZeroMQ + Protocol Buffers. Whatever your favorite web framework is, you can send requests to Torch asynchronously over ZeroMQ, optionally serializing the messages with Protocol Buffers, then handle each request in Torch and send the result back.

This way I was able to get much higher throughput than waffle's 20K benchmark.
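A rough sketch of the Torch side of such a setup, using the lzmq binding (the endpoint and message format are assumptions, and the Protocol Buffers serialization step is left out):

    local zmq = require 'lzmq'

    local ctx = zmq.context()
    local sock = ctx:socket(zmq.REP)   -- REP socket: one reply per incoming request
    sock:bind('tcp://*:5555')

    while true do
       local msg = sock:recv()             -- block until a request arrives
       -- deserialize msg, run the Torch model, serialize the result here
       sock:send('result for: ' .. msg)    -- send the reply back to the client
    end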

+1

Try llserver, a minimalistic Lua server. It runs as a single coroutine and serves dynamic content through a callback function: https://github.com/ncp1402/llserver You can perform other tasks/calculations in additional coroutines.

0
