Apple is developing a new method to create 3D models from regular 2D photos using artificial intelligence. According to a research paper published by Apple, the system can take multiple pictures of an object from different angles and build a complete 3D version of it. The goal is to improve how digital objects are created, especially for applications such as augmented reality (AR), 3D modeling, and product design.
This method differs from traditional tools, which often need special equipment like depth sensors or LiDAR. Instead, Apple’s technique combines regular images with a neural network trained to infer what an object should look like in 3D. It works by comparing the different photos and building a 3D shape consistent with all of them. The researchers used what are called “tri-plane features” to help the AI capture the object’s depth, texture, and shape more accurately.
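The paper's exact architecture isn't described here, but tri-plane features are commonly implemented as three orthogonal 2D feature grids (XY, XZ, YZ): a 3D point is projected onto each plane, features are sampled bilinearly, and the three results are aggregated. Below is a minimal NumPy sketch of that lookup, assuming unit-cube coordinates and summed aggregation (both assumptions, not details from Apple's paper):

```python
import numpy as np

def sample_plane(plane, u, v):
    """Bilinearly sample a (H, W, C) feature plane at continuous coords (u, v) in [0, 1]."""
    H, W, C = plane.shape
    x, y = u * (W - 1), v * (H - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * plane[y0, x0]
            + wx * (1 - wy) * plane[y0, x1]
            + (1 - wx) * wy * plane[y1, x0]
            + wx * wy * plane[y1, x1])

def triplane_features(planes, point):
    """Project a 3D point in [0, 1]^3 onto the XY, XZ, YZ planes and sum the sampled features."""
    x, y, z = point
    xy, xz, yz = planes
    return (sample_plane(xy, x, y)
            + sample_plane(xz, x, z)
            + sample_plane(yz, y, z))

# Toy example: three random 32x32 planes with 8 feature channels each.
# In a real system these planes would be learned from the input photos.
rng = np.random.default_rng(0)
planes = [rng.standard_normal((32, 32, 8)) for _ in range(3)]
feat = triplane_features(planes, (0.3, 0.7, 0.5))
print(feat.shape)  # (8,)
```

A small decoder network would then map each per-point feature vector to color and density, which is what makes the representation cheaper than a full 3D voxel grid.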
Apple’s system performed well in tests, often outperforming comparable AI models. One big advantage is that it doesn’t need perfectly edited or aligned pictures; it can handle real-world, messy photo sets. This could make 3D creation much easier for everyday users and developers.
Although Apple hasn’t said when or if this technology will be added to its products, it shows the company is looking at new ways to bring more advanced AI tools into creative workflows. It could have a big impact on AR, design, and even how we shop online in the future.