AI in the Wild: A ChatGPT Simulated Town 🤖🔥
Victor Bueno
Posted on January 8, 2024
Have you heard about a village where all its citizens are controlled by artificial intelligence? It sounds like a great movie plot, but it's not fiction—it's a study done a few months ago that I just discovered and felt compelled to share in this article.
The study
Named "Generative Agents: Interactive Simulacra of Human Behavior", this study, led by Google and Stanford researchers, involved 25 ChatGPT agents playing a game where each one assumed the role of a character in a simulated town. The agents were provided with simple descriptions, including their occupation and relationships with other agents, essentially receiving a snapshot of their past memories and motivations.
As the paper puts it: "Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day."
A memory system was also implemented, so every experience "lived" by an agent was stored in its own memory. For instance, when an agent named Eddy told his father, John, about his homework, John stored that information; later, when Eddy was off at school and John's wife asked about their son, John knew that Eddy was excited about the music he was composing for homework and was able to share that.
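The paper calls this the agent's "memory stream": everything an agent observes gets recorded, and relevant pieces are pulled back out when the agent needs them. To make the idea concrete, here is a minimal sketch in Python. The class names, scoring weights, and the word-overlap relevance are my own simplifications, not the researchers' implementation, which scores memories by recency, importance, and embedding-based relevance.

```python
from dataclasses import dataclass, field
from datetime import datetime
import math


@dataclass
class Memory:
    text: str           # e.g. "Eddy is composing a music piece for his homework"
    created: datetime
    importance: float   # 0..1, how notable the event felt to the agent


@dataclass
class MemoryStream:
    memories: list = field(default_factory=list)

    def store(self, text, importance):
        """Record a new experience with a timestamp."""
        self.memories.append(Memory(text, datetime.now(), importance))

    def retrieve(self, query, now, k=3):
        """Return the k memories that best match recency + importance + relevance."""
        query_words = set(query.lower().split())

        def score(m):
            hours_old = (now - m.created).total_seconds() / 3600
            recency = math.exp(-0.1 * hours_old)            # fresher memories rank higher
            overlap = len(query_words & set(m.text.lower().split()))
            relevance = overlap / max(len(query_words), 1)  # crude word-overlap relevance
            return recency + m.importance + relevance

        return sorted(self.memories, key=score, reverse=True)[:k]


# John stores what Eddy told him; a later question about their son surfaces it again.
john = MemoryStream()
john.store("Eddy is composing a music piece for his class homework", importance=0.6)
john.store("John watered the plants in the garden", importance=0.2)
for memory in john.retrieve("what is Eddy working on for school", datetime.now(), k=1):
    print(memory.text)
```

In the actual study, the retrieved memories are then fed back into the language model's prompt, which is what lets John answer his wife's question about Eddy.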
What happened
Some interesting things happened during the study, such as:
- Each agent created its own routine without being given specific instructions, based only on its backstory, including daily activities such as brushing teeth, taking a shower, and catching up with other agents.
- One agent was given, in its backstory, the intent to throw a Valentine's Day party. After the event was proposed, the agents autonomously spread invitations, and even agents who weren't invited by the host learned about the party through others, mirroring real-life information flow in a city.
- At the Valentine's Day party, guests and the host coordinated to schedule their arrival, showcasing a level of collaboration akin to real-world social dynamics.
- Another example of information diffusion: a mayoral election was taking place, and agents that initially knew nothing about it heard the news from other agents and were even able to discuss the election later.
- The agents also reflected on themselves and on what matters to them. One, for instance, was able to "understand" from his own observations how dedicated he is to his research work.
AIs more human than... Humans?
A controlled evaluation tested whether ChatGPT agents could exhibit believable human behavior based on their environment and experiences.
Since the agents respond to natural-language questions, they were essentially "interviewed" to assess five human abilities: maintaining self-knowledge, retrieving memories, generating plans, reacting, and reflecting.
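To picture what such an interview might look like, here is a hypothetical prompt-assembly sketch: the agent's seed description and a few retrieved memories are packed into a prompt, and a chat model then answers in character. The function name, prompt format, and the details in the example are my own illustrative assumptions, not the study's actual prompts.

```python
# Illustrative only: one way an "interview" question could be posed to an agent.
def interview_prompt(agent_name, description, retrieved_memories, question):
    context = "\n".join(f"- {m}" for m in retrieved_memories)
    return (
        f"You are {agent_name}. {description}\n"
        f"Relevant memories:\n{context}\n"
        f"Interviewer: {question}\n"
        f"{agent_name}:"
    )


prompt = interview_prompt(
    "John",
    "You live in the town with your wife and your son Eddy.",
    ["Eddy is composing a music piece for his class homework"],
    "What is your son working on these days?",
)
print(prompt)  # this text would then be sent to the chat model for an in-character answer
```

An answer that correctly mentions Eddy's music would count toward memory retrieval, while questions about plans for tomorrow or reactions to a surprise would probe the other abilities.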
A group of 25 people watched full replays of the simulation, each following one agent while inspecting that agent's memory data. These individuals were then interviewed while roleplaying as their chosen agent. In the final evaluation, other human assessors were invited to rank the scenarios by believability. The result was striking: ChatGPT proved more convincing at roleplaying a human than an actual human was, which is perhaps not surprising at this point.
Glitch in the Matrix
ChatGPT is amazing, as we already know, but the agents displayed some behaviors the researchers weren't expecting:
- Some agents failed to absorb information fully, occasionally retrieving incomplete memory fragments during interviews.
- The stores in the simulated town close at 5 p.m., but a few agents entered them after closing time, misunderstanding the shops' operating hours.
- Even though the bar was meant to be a place to go later in the day, some agents preferred to have lunch there instead of at the café.
My thoughts
Studies like this are really interesting to me. I don't work directly with AI or machine learning, but it's a field that makes me reflect a lot.
And honestly, it's impressive how human an artificial intelligence can get. Even the agents' "errors" sound a lot like human ones to me... Who doesn't forget information they should know? Who hasn't gone to a store just to find out it's closed? And who has never grabbed a beer earlier than they should?
It makes me wonder: how far will artificial intelligence go? Perhaps in the near future, video games will feature NPCs (non-playable characters) entirely controlled by AI, free from predetermined scripts, offering an immersive experience beyond our current imagination.
I would love to hear your thoughts on the matter: what implications do you think this study could have?
If you are interested in delving deeper into the research, click here to access the official GitHub repository, where you can find the complete paper and instructions to replicate the simulation.