Partner, helper, or boss? We asked ChatGPT to design a robot and this happened


European researchers working on the design of a tomato-picking robot.

Adrien Buttier/EPFL

With alarm bells ringing about artificial intelligence (AI) pushing us toward extinction, you might well imagine the process of an AI designing a robot as something akin to Frankenstein creating the Terminator — or even the other way around!

But what if, at some point in the future, dystopian or otherwise, we need to collaborate with machines on solving problems? How would that collaboration work? Who would be bossy and who would be submissive?

Also: How to write better ChatGPT prompts

Having ingested many episodes of the dystopian Netflix series Black Mirror, along with a side order of Arthur C. Clarke’s “2001: A Space Odyssey,” I’d bet the farm on the machine being bossy.

However, an actual experiment of this sort, conducted by European researchers, turned up some surprising results that could have a major impact on human-machine collaboration.

Assistant Professor Cosimo Della Santina and PhD student Francesco Stella, both from TU Delft, and Josie Hughes from Swiss technical university EPFL, conducted an experiment to design a robot in partnership with ChatGPT that solved a major societal problem.

“We wanted ChatGPT to design not just a robot, but one that is actually useful,” said Della Santina in a paper published in Nature Machine Intelligence.

And so began a series of question-and-answer sessions between the researchers and the bot to figure out what the two could design together.

Also: The best AI chatbots: ChatGPT and other noteworthy alternatives

Large language models (LLMs) like ChatGPT are absolute beasts when it comes to their ability to churn through and process huge amounts of text and data, and can spit out coherent answers at blazing speed.

The fact that ChatGPT can do this with technically complex information makes it even more impressive — and a veritable boon for anyone seeking a super-charged research assistant.

Working with machines

When ChatGPT was asked by the European researchers to identify some of the challenges confronting human society, the AI pointed to the issue of securing a stable food supply in the future. 

A back-and-forth conversation between the researchers and the bot ensued, until ChatGPT picked tomatoes as a crop that robots could grow and harvest — and, in doing so, make a significant positive impact on society.


ChatGPT came up with useful suggestions on how to design the gripper, so it could handle delicate objects like tomatoes.

Adrien Buttier/EPFL

This is one area where the AI partner added real value: making suggestions in areas such as agriculture, where its human counterparts had no hands-on experience. Without it, identifying the crop with the most economic value for automation would have required time-consuming research by the scientists.

“Even though ChatGPT is a language model and its code generation is text-based, it provided significant insights and intuition for physical design, and showed great potential as a sounding board to stimulate human creativity,” said EPFL’s Hughes.

Also: These are my 5 favorite AI tools for work

The humans were then responsible for selecting the most interesting and suitable directions to pursue, based on the options ChatGPT provided.

Intelligent Design

Figuring out a way to harvest tomatoes is where ChatGPT truly shone. Tomatoes and similarly delicate fruits — yes, the tomato is a fruit, not a vegetable — pose the greatest challenge when it comes to harvesting.


The AI-designed gripper at work.

Adrien Buttier/EPFL

When asked how to harvest tomatoes without damaging them, the bot did not disappoint, generating some original and useful solutions.

Realizing that any parts coming into contact with the tomatoes would have to be soft and flexible, ChatGPT suggested silicone or rubber as material options. ChatGPT also pointed to CAD software, molds, and 3D printers as ways to construct these soft hands, and it suggested a claw or a scoop shape as design options.

Also: 7 ways you didn’t know you can use Bing Chat and other AI chatbots

The result was impressive. This AI-human collaboration successfully architected and built a working robot that was able to dexterously pick tomatoes, which is no easy feat, considering how easily they are bruised.

The perils of partnership

This unique collaboration also introduced many complex issues that will become increasingly salient to a human-machine design partnership. 

A partnership with ChatGPT offers a truly interdisciplinary approach to problem-solving. Yet, depending on how the partnership is structured, you could have differing outcomes, each with substantial implications.

For example, LLMs could furnish all the details needed for a particular robot design while the human simply acts as the implementer. In this approach, the AI becomes the inventor and allows non-specialists to engage in robotic design.

Also: How to use ChatGPT

This relationship is similar to the experience the researchers had with the tomato-picking robot. While they were stunned by the success of the collaboration, they noticed that the machine was doing a lot of the creative work. “We did find that our role as engineers shifted towards performing more technical tasks,” said Stella.

It’s also worth considering that this lack of human control is where dangers lurk. “In our study, ChatGPT identified tomatoes as the crop ‘most worth’ pursuing for a robotic harvester,” said EPFL’s Hughes. 

“However, this may be biased towards crops that are more covered in literature, as opposed to those where there is truly a real need. When decisions are made outside the scope of knowledge of the engineer, this can lead to significant ethical, engineering, or factual errors.”

Also: AI safety and bias: Untangling the complex chain of AI training

And this concern, in a nutshell, is one of the grave perils of using LLMs. Their seemingly miraculous answers are only possible because they’ve been fed a certain type of content and then asked to regurgitate parts of it, much like the classical style of education that many societies still rely on today.

Answers will essentially reflect the bias — both good and bad — of the people who designed the system and the data it has been fed. This bias means that the historical marginalization of segments of society, such as women and people of color, is often replicated in LLMs.

And then there’s the pesky problem of hallucinations in LLMs such as ChatGPT, where the AI simply makes things up when confronted with questions to which it does not have easy answers.

There’s also the increasingly thorny problem of proprietary information being used without permission, as several lawsuits filed against OpenAI have begun to expose.

Also: ChatGPT vs. Bing Chat: Which AI chatbot should you use?

Nevertheless, an even-handed approach — where the LLMs play more of a supporting role — can be enriching and productive, allowing for vital interdisciplinary connections to be forged that could not have been fostered without the bot.

That said, you will have to engage with AIs the same way you do with your children: assiduously double-check everything they tell you, especially when they sound glib.
