Meta is investing billions in virtual reality. However, the metaverse is proving more difficult to create than initially assumed. A few slick virtual worlds can be built, but convincing large numbers of people to invest in them remains unrealistic as long as the cost of creating them is so disproportionate to the benefits they deliver.
So the cost of creation must come down. A virtual universe has to offer a complexity at least as rich as that of the real one, and it must become possible to offer thousands of virtual universes for virtually any conceivable need.
A tremendous amount of data
Meta has dizzying amounts of contextualized images of our world, as well as personal data on nearly one-third of the earth's inhabitants.
That is why it can use artificial intelligence to help furnish the worlds we will want in interesting ways, with content that matches our likes and dislikes.
To that end, Meta is previewing its Make-A-Video, along the same lines as OpenAI's Dall.E 2.
Videos from text
Even if the results seem rather primitive and naïve for now, the potential is clear. Artificial intelligences learn fast and, above all, remember. Just as 3D objects can already be reconstructed from photos taken from several angles, this project proposes to create videos from the phrases we choose to give it.
- "Unicorn flying over a mystical landscape."
- "Humans building a highway on Mars."
- "Grizzly confused in a math class."
The A.I. takes the labeled images it has and tries to assemble them into a proportionally coherent whole, starting from the concepts extracted from the prompts:
Unicorn - fly - above - mystical landscape - humans - build - highway - Mars - grizzly - confusion - math class.
It knows that fly is a verb, that a grizzly is an animal, that confusion is an emotion, and that a math class is a place where math concepts are learned. Flying happens in the air, a grizzly is large in proportion to a human, confusion shows in a specific posture, and so on. The challenge is to assemble the images into a whole that looks acceptable to us.
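To make that decomposition concrete, here is a deliberately simplified sketch in Python. It is not Meta's pipeline: the concept dictionary, the heights and the posture note are invented for the illustration, standing in for knowledge a real model would have to learn from millions of labeled images.

```python
# Toy illustration (not Meta's actual method): decompose a prompt into labeled
# concepts and assemble a rough, proportionally plausible scene description.

# Hand-made "knowledge" about a few concepts; the values are invented for the example.
CONCEPTS = {
    "unicorn":            {"kind": "creature", "height_m": 1.8},
    "grizzly":            {"kind": "creature", "height_m": 2.4},
    "humans":             {"kind": "creature", "height_m": 1.7},
    "highway":            {"kind": "structure"},
    "mars":               {"kind": "place"},
    "math class":         {"kind": "place"},
    "mystical landscape": {"kind": "place"},
    "flying":             {"kind": "action", "implies": "in the air"},
    "building":           {"kind": "action"},
    "confused":           {"kind": "emotion", "posture": "head tilted, furrowed brow"},
}

def describe_scene(prompt: str) -> list[str]:
    """Return a crude scene description built from the concepts found in the prompt."""
    text = prompt.lower()
    found = {name: info for name, info in CONCEPTS.items() if name in text}
    notes = []
    for name, info in found.items():
        if info["kind"] == "creature":
            notes.append(f"{name}: drawn roughly {info['height_m']} m tall")
        elif info["kind"] == "action" and "implies" in info:
            notes.append(f"{name}: placed {info['implies']}")
        elif info["kind"] == "emotion":
            notes.append(f"{name}: shown through posture ({info['posture']})")
        elif info["kind"] == "place":
            notes.append(f"{name}: used as the background setting")
    return notes

if __name__ == "__main__":
    for line in describe_scene("Unicorn flying over a mystical landscape."):
        print(line)
```

The real system, of course, does not use a hand-written dictionary: the point of the sketch is only to show the kind of structured, proportion-aware understanding that has to come out of the text before any images can be assembled.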
We are still a long way from full 3D reconstruction, but the time will come when we get there, and from that point on the creation of virtual universes will accelerate by several orders of magnitude.
Our reflection
An A.I. relies on our data and perceptions, with all their biases, and there can be many. How the data is collected, mainly through the lenses of our phones and the comments that accompany them on social networks, radically influences what it can offer. Where the phones don't go, the A.I. won't know.
There will always be room for original creations, but the critical phenomenon is feedback, as when a microphone is placed in front of a speaker, and "mise en abyme", as between two mirrors facing each other, each reflecting its own image.
An A.I. draws inspiration from our images and proposes new ones to us; we then feed back other images derived from what it proposed, or from the influence it has imposed, in an iteration with less and less originality and more and more excess. If the single image of the earth seen from space could impose itself on the consciousness of the entire human population, what about the influence of millions of videos and images produced by an A.I.?
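A toy simulation can make that feedback risk tangible. The assumption here is crude and entirely hypothetical: each new "image" is a blend of two previously generated ones plus a little noise. Under that assumption the diversity of the pool shrinks generation after generation, which is exactly the loss of originality described above.

```python
# Toy simulation (an illustrative assumption, not a measured result): when new
# "images" are produced by blending previously generated ones, the pool's
# diversity collapses over successive generations.
import random
import statistics

def next_generation(pool: list[float], noise: float = 0.01) -> list[float]:
    """Each new item blends two existing items, plus a little noise."""
    new_pool = []
    for _ in range(len(pool)):
        a, b = random.sample(pool, 2)
        new_pool.append((a + b) / 2 + random.gauss(0, noise))
    return new_pool

if __name__ == "__main__":
    random.seed(0)
    # Start from a diverse pool of "styles", encoded here as numbers in [0, 1].
    pool = [random.random() for _ in range(200)]
    for generation in range(6):
        spread = statistics.stdev(pool)
        print(f"generation {generation}: diversity (std dev) = {spread:.3f}")
        pool = next_generation(pool)
```

Run it and the printed spread drops each generation: once outputs feed back into inputs, the only thing keeping variety alive is whatever genuinely new material still enters the loop.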
The taste of the real may be inimitable, but it can apparently be diluted in the virtual.
References
Make-A-Video - https://makeavideo.studio/
Dall.E 2 - https://openai.com/dall-e-2/
What an Artificial Intelligence Understands About Our World - Denys Lamontagne - Thot Cursus - https://cursus.edu/fr/24688/ce-quune-intelligence-artificielle-comprend-de-notre-monde