The pace of innovation in AI image generation is extraordinary. One company – Luma Labs – provides an excellent example of a very practical and interesting use of the latest technology applied to 3D images.
Luma AI is in beta testing on iPhone and will eventually be made available on Android as well. I joined the beta test group and can share some information about what this impressive app does and how easy it is to get great results.
Luma AI is an application and service developed by Luma Labs. It captures 3D scenes using a technology known as Neural Radiance Fields (NeRF). The technique is loosely similar to the ray tracing that makes the graphics in high-end games look so realistic.
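For the technically curious, the heart of NeRF (as described in the original research literature) is a volume-rendering integral: the color of each camera ray is accumulated from a learned density and color field sampled along that ray. Using the symbols conventional in the NeRF papers, it is commonly written as:

```latex
% Expected color C of camera ray r(t) = o + t d, between near and far bounds t_n, t_f
C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma\bigl(\mathbf{r}(t)\bigr)\,\mathbf{c}\bigl(\mathbf{r}(t), \mathbf{d}\bigr)\,dt,
\qquad
T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma\bigl(\mathbf{r}(s)\bigr)\,ds\right)
```

Here \(\sigma\) is the learned volume density, \(\mathbf{c}\) the learned view-dependent color, and \(T(t)\) the accumulated transmittance (how much of the ray survives to depth \(t\)). Training fits \(\sigma\) and \(\mathbf{c}\) so that rendered rays match the captured photos.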
NeRFs have been around for a few years now but, until very recently, lived mostly in research labs. With the explosion of AI image generation, exemplified by tools like DALL-E, NeRFs are beginning to be explored by a much wider audience. The first wave of NeRF software required developer skills: installing software packages from GitHub and then training the AI on a set of images. That was a bit much for the average person.
Luma Labs is making the process dramatically simpler with the Luma AI app. From start to finish, the entire process can be managed from your iPhone, and the end result is much easier to access and share.
Since Apple has been keen to demonstrate the 3D depth-sensing capabilities of its LiDAR sensor, you might expect Luma AI to require an iPhone 14 Pro or iPhone 14 Pro Max to capture 3D models. However, the clever developers at Luma Labs use photogrammetry instead, which makes the technology compatible with older iPhones such as the iPhone 11.
In the future, the app will become available on Android, and a web version is already in beta testing as well. In an interview, Luma Labs CEO Amit Jain said the iPhone app is expected to be ready for public release in a few weeks.
To use Luma AI, you simply move slowly around an object, circling it at three different heights. An augmented reality overlay guides you through the process, which takes a few minutes and gets easier once you are familiar with it. Before long, you'll be able to capture something medium-sized, like a chair, in a few minutes.
An object of any size can be captured because, to Luma AI, it is just a series of images, regardless of the size of the subject. Whether you circle a cup, a statue, or a building, the general idea remains the same.
The app notifies you when it has enough images, and when that happens, a Finish button appears. You can also keep circling to fill in the gaps in the AR cloud of rings and rectangles that represent the photos taken so far; the app stops capturing automatically once enough photos have been collected. There is also a freeform mode that lets you take extra photos at different angles and distances. You can watch the process in the YouTube video I created below. It's an iPhone app, so it's a portrait video.
Processing is the next step, and it takes place on Luma Labs' servers. After an hour or so, the final NeRF becomes available in the app in several different forms. The first is a generated video showing a flight around the object in its natural environment. Next is an interactive version that lets you rotate the view by dragging a finger or mouse across the image.
Even more impressive, the captured subject is also available extracted from its background. With this representation, you can rotate the 3D object on any axis and zoom in to see it up close. Sharpness depends on the number of photos collected and how slow and steady you were during the capture process.
Luma Labs is updating the app and service at a remarkable pace. Within a week of my receiving the beta test invitation, two powerful new features were added that greatly expand the possibilities. The first is a web upload option that lets you capture video without the app and then upload it to the Luma Labs website for processing. Results appear online and in the app.
This means it is possible to use any of the iPhone's camera modes, capture video with a dedicated camera, or even record video with smart glasses like Ray-Ban Stories. For example, a drone video becomes even more epic when you can smooth out the movement and change direction after you've already landed. Luma Labs shared a good example showing an aerial view of autumn leaves in this tweet.
– Luma AI (@LumaLabsAI) October 22, 2022
Another new feature opens up possibilities for 3D editing, drawing, and 3D printing. 3D meshes can be exported with textures in OBJ or GLTF format. The mesh is not optimized, but it can be viewed with its texture intact even in an online viewer like the free, open-source Online3DViewer.
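To get a feel for what an exported mesh actually contains, it helps to know that OBJ is a plain-text format: lines beginning with `v` are vertices, `vt` are texture coordinates, and `f` are faces. Here is a minimal sketch (the sample data is a made-up stand-in, not a real Luma AI export) that summarizes an OBJ file's geometry:

```python
def summarize_obj(obj_text):
    """Count the geometry records in Wavefront OBJ text.

    'v' lines are vertex positions, 'vt' lines are texture
    coordinates, and 'f' lines are faces referencing them.
    """
    counts = {"vertices": 0, "texcoords": 0, "faces": 0}
    for line in obj_text.splitlines():
        if line.startswith("vt "):
            counts["texcoords"] += 1
        elif line.startswith("v "):
            counts["vertices"] += 1
        elif line.startswith("f "):
            counts["faces"] += 1
    return counts

# A tiny single-triangle OBJ as a stand-in for a real export
sample = """\
v 0 0 0
v 1 0 0
v 0 1 0
vt 0 0
vt 1 0
vt 0 1
f 1/1 2/2 3/3
"""
print(summarize_obj(sample))  # {'vertices': 3, 'texcoords': 3, 'faces': 1}
```

A real Luma AI export would report thousands of vertices and faces; the same parsing idea applies.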
It is also possible to open the 3D files in a mesh editor such as the free and open-source MeshLab to delete any stray artifacts that appear as floating points, as well as to clean up and simplify the mesh before exporting it in various formats. The statue shown above is about three inches tall and was sculpted by my wife Tracy for her work, AL LittleCharacter. Luma AI captured an impressive amount of detail in the statue and the log it was resting on. The log could have been selected and removed in MeshLab as well.
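The cleanup MeshLab performs on floating artifacts boils down to a connected-components idea: faces that share vertices form one "island," and tiny isolated islands are debris. This is a simplified sketch of that idea (not MeshLab's actual code), using a union-find over face vertices and keeping only the largest island:

```python
from collections import defaultdict

def largest_component_faces(faces):
    """Keep only the faces in the largest connected component.

    faces: list of vertex-index tuples. Two faces are connected when
    they share a vertex, so floating debris lands in small components
    that get dropped.
    """
    parent = {}

    def find(v):
        # Union-find root lookup with path halving
        while parent.setdefault(v, v) != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    def union(a, b):
        parent[find(a)] = find(b)

    # Link all vertices of each face into one set
    for face in faces:
        for v in face[1:]:
            union(face[0], v)

    # Group faces by their component's root, keep the biggest group
    groups = defaultdict(list)
    for face in faces:
        groups[find(face[0])].append(face)
    return max(groups.values(), key=len)

# Main mesh: three faces sharing vertices; debris: one isolated triangle
mesh = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (10, 11, 12)]
print(largest_component_faces(mesh))  # [(0, 1, 2), (1, 2, 3), (2, 3, 4)]
```

A real cleanup pass would also offer a size threshold (keep every component above N faces) rather than keeping only the single largest, but the grouping step is the same.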
Kyle Russell shared a candy display from a party, noting that he asked the adults to hold off on dessert so he could capture it as a digital diorama.
At a birthday party last night I made a group of adults not eat dessert so I could circle the table with my phone doing 3D AI dreams like a very cool person pic.twitter.com/sP0vVPB3yx
– Kyle Russell (@kylebrussell) October 30, 2022
Although Luma AI can process video, it relies on still images to create a 3D scene. This means that if the subject moves, the quality or clarity of the capture may suffer. A 3D image of a seated person works fine, as shown in Albert Bozesan's tweet. In the same tweet, a second capture of a sculpture shows what happens when there is movement within the scene: people who walked close to the subject appear in the background as distorted figures.
Albert Bozesan October 30, 2022
Luma AI is currently in beta testing, and invitations go out periodically via the company's Twitter account. If you have a compatible iPhone and are interested in this technology, you may be able to get early access. There is also a waiting list on the Luma Labs website.
Jain, CEO of Luma Labs, noted that pricing is yet to be determined and depends on how broad the user base turns out to be and how the results of the scans are used. Based on that, there may be a professional subscription with more advanced features and a lower-priced personal subscription. For now, the app remains free to use.