Save ARKit cloud point data and retrieve for display

I want to persist point cloud data recorded using ARKit and Unity so that it can later be retrieved and displayed exactly as it was originally recorded. Say I'm displaying point cloud data exactly as the UnityPointCloudExample.cs script included in the Unity ARKit plugin does, but whenever a point is detected and displayed, I also save some relevant information about it. Then I completely close the application. When I open the application again, I want to be able to reload the data in the same positions relative to the camera as at the time of initial detection. Is this possible using ARKit and the Unity plugin as-is?

I understand this will require storing some information about the camera's position relative to each point, and then, on restarting the application, computing some kind of transform between the camera's new position and its position in the previous session in which the points were recorded, and using that transform to place the points correctly. Looking through the ARKit documentation, I'm not quite sure how to do this with the native interface, and I'm even less sure how to achieve it with the Unity plugin. If someone could at least point me to the parts of the Unity plugin or the native ARKit API that would most easily support implementing this functionality, I would greatly appreciate it.

Or, if this is outside the scope of the ARKit / Unity plugin in its current state, an explanation of how and why that is so would be equally useful. Thanks!

c# augmented-reality unity3d arkit point-clouds
1 answer

ARKit sets the world origin to (0, 0, 0) at the point where AR tracking first starts. It is not possible to correctly reload an AR scene in a subsequent run using coordinates from a previous run without first establishing the relationship between the previous run's coordinate space and the new run's.

To relate a previous ARKit run to a new one, we can use landmarks, either placed manually or detected using some kind of object recognition. For simplicity, assume we place the landmarks manually.

Here's a pipeline that lets us save and restore an ARKit scene across runs:

  1. Initial scene setup:

     • Launch our ARKit app for the first time and allow ARKit to initialize tracking.
     • Select two reference points on a flat horizontal surface in our environment; in a room, for example, we could pick two corners. These landmarks will be used to reload our ARKit scene later.

  2. Place objects in AR space as desired. When this is done, save the positions of our AR objects and of our two reference points to a file.

  3. On restart, place the same two reference points at their previously saved physical locations. Given these two points, we can now reload assets into their previous places by computing their locations relative to the old landmarks and then placing them relative to the newly placed ones.
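The save/load half of this pipeline can be sketched independently of Unity. Below is a minimal, hypothetical format: the first two lines of the file are the landmark positions, and the remaining lines are saved object positions, each as a comma-separated triple. `Point3` and `SceneStore` are illustrative names, not part of the plugin; a real app would likely serialize full poses (rotations included) as JSON or binary instead.

```csharp
using System;
using System.Globalization;
using System.IO;
using System.Linq;

// Minimal position record; a real app would also store rotation.
public struct Point3
{
    public float X, Y, Z;
    public Point3(float x, float y, float z) { X = x; Y = y; Z = z; }

    public override string ToString() =>
        string.Format(CultureInfo.InvariantCulture, "{0},{1},{2}", X, Y, Z);

    public static Point3 Parse(string s)
    {
        var p = s.Split(',')
                 .Select(v => float.Parse(v, CultureInfo.InvariantCulture))
                 .ToArray();
        return new Point3(p[0], p[1], p[2]);
    }
}

public static class SceneStore
{
    // File layout: first two lines are the reference landmarks,
    // the remaining lines are saved object positions.
    public static void Save(string path, Point3[] landmarks, Point3[] objects) =>
        File.WriteAllLines(path, landmarks.Concat(objects).Select(p => p.ToString()));

    public static (Point3[] landmarks, Point3[] objects) Load(string path)
    {
        var pts = File.ReadAllLines(path).Select(Point3.Parse).ToArray();
        return (pts.Take(2).ToArray(), pts.Skip(2).ToArray());
    }
}
```

In Unity, `Save` would typically be called with positions taken from `transform.position` at the end of a session, writing under `Application.persistentDataPath` so the file survives the app being closed.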

To reduce the manual effort required of the user, we can extend this with image tracking / detection. When a known, registered image or object is detected, we automatically set its location in AR space as one of the two landmarks. Once both landmarks have been detected, we can "automatically" reload the scene as described in step 3. This also eliminates the error introduced by manually placing the points.
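The realignment math in step 3 can be sketched as follows: since both landmarks lie on a horizontal plane, the transform between the old and new sessions is a yaw rotation about the vertical axis plus a translation. This is a Unity-independent sketch with illustrative names (`Realign`, `YawBetween`, `Map`); in Unity you would instead build a `Quaternion.AngleAxis` rotation about `Vector3.up`, and the sign convention of Unity's left-handed, Y-up space may differ from this plain 2D rotation.

```csharp
using System;

// Computes the rigid transform (yaw about the vertical axis plus translation)
// that maps the two landmarks saved in the previous session onto the same
// landmarks re-placed in the new session, then applies it to saved positions.
public static class Realign
{
    // Yaw angle (radians) rotating the old landmark direction (A -> B)
    // onto the new landmark direction, projected onto the horizontal XZ plane.
    public static double YawBetween(
        double oldAx, double oldAz, double oldBx, double oldBz,
        double newAx, double newAz, double newBx, double newBz)
    {
        double oldAngle = Math.Atan2(oldBz - oldAz, oldBx - oldAx);
        double newAngle = Math.Atan2(newBz - newAz, newBx - newAx);
        return newAngle - oldAngle;
    }

    // Maps a point from old-session coordinates into new-session coordinates:
    // rotate about old landmark A by yaw, then translate old A onto new A.
    public static (double x, double y, double z) Map(
        (double x, double y, double z) p,
        (double x, double y, double z) oldA,
        (double x, double y, double z) newA,
        double yaw)
    {
        double dx = p.x - oldA.x, dz = p.z - oldA.z;
        double cos = Math.Cos(yaw), sin = Math.Sin(yaw);
        // Standard 2D rotation in the XZ plane.
        double rx = dx * cos - dz * sin;
        double rz = dx * sin + dz * cos;
        return (newA.x + rx, newA.y + (p.y - oldA.y), newA.z + rz);
    }
}
```

Only the first landmark pair fixes the translation; the second pair is needed solely to recover the yaw, which is why two points suffice on a horizontal plane.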
