ChatGPT’s success could prompt a damaging swing to secrecy in AI, says AI pioneer Bengio

The OpenAI logo is seen reflected in an eye. Photo by Jaap Arriens/NurPhoto via Getty Images

The most important artificial intelligence program in the world right now is from a company that, unlike many of its peers, doesn’t publish its source code. 

ChatGPT, created by OpenAI, is not open-sourced on GitHub like many other natural language programs before it. Nor was the source code of its predecessors, OpenAI’s earlier GPT programs, made readily available.

And on Tuesday, the company reached something of a milestone, refusing even to disclose the technical details of its latest version, GPT-4.

The lack of transparency around ChatGPT and GPT-4 is a break with common practice in deep-learning AI, where scholars in both academia and enterprise have tended to publish aggressively, following the tradition of open-source software, in which code is made readily available to anyone who wants it.

The closed nature of ChatGPT could become much more the norm in AI, a shift with ethical implications, warned AI pioneer Yoshua Bengio, the scientific director of Canada’s MILA Institute for AI, in a talk last week to reporters and members of industry.

“The academic way of thinking of researchers who used to be in academia and have moved to industry has changed the culture to bring more of that open-source spirit of sharing in general and collaborating,” said Bengio in a small gathering of press and executives on Zoom last week.

“But the pressures of markets are probably going to push in a different direction,” said Bengio, “towards secrecy, which is bad for ethical reasons, and also bad for the advancement of technological progress, because it means information is going to take more time to reach more places if things are secret.”

Bengio, who is a full professor at the University of Montreal and also co-directs the CIFAR Learning in Machines and Brains Program, was an invited speaker for an hour-and-a-half talk hosted by the Collective[i] Forecast, an online, interactive discussion series organized by Collective[i], which bills itself as “an AI platform designed to optimize B2B sales.”

Bengio was responding to audience member Laura Wilson, who asked, “Is it still possible to have an ethical framework” in AI research, given the enormous commercial potential of ChatGPT and of Microsoft’s Bing search program, which is incorporating ChatGPT’s capabilities.

Academics, said Bengio, are “going to continue to do their open science and sharing their work, because that’s part of their model.” 

It’s less clear, he said, that industry research will stick to the open path.

Corporate research used to be “much more secretive” before the recent AI focus on open publishing, said Bengio. “Now, I think looking at the sort of gold rush that’s likely to happen” in AI, following ChatGPT’s success, “are we going to keep that culture in industry?” he mused.

Published papers are paramount, said Bengio, because AI progresses as a collective effort of cross-pollination between labs.

“These are complicated systems,” said Bengio of so-called large language models such as GPT-4.

“We build our code on top of others’ code, and it’s also building on top of the ideas that are being written up and evaluated in scientific papers all around the world — we build on each other’s progress.”

“There are patents, but, really, the actual meat is something that is in those papers.”

Not only could secretive practices set back industry research, said Bengio; they can also obscure harms to the public.

“If people move too quickly and break things, it could be bad,” said Bengio, “and it might even be a backlash for the whole industry.”

Small companies, observed Bengio, are generally more willing to take risks with untested software because “that’s the game of business.” He was alluding to programs such as ChatGPT that have in some cases produced results users find “disturbing,” suggesting the programs are not fully developed.

“But now companies like Google and Microsoft and others feel compelled to jump into the race,” he said. “So, one possible concern I’ve heard is, are they going to be as careful of what they’re putting out there?”

Bengio’s peer, Yann LeCun, who along with Bengio received the Turing Award in 2019 for their work on AI, has expressed similar concerns. 

In a tweet on February 17th, LeCun, who is chief scientist for AI at Meta, wrote:

FAIR [Facebook AI Research] played a key role in making the AI R&D scene open. Others followed. At least for a while. Now, OpenAI, DeepMind, & perhaps even Google are clearly publishing & open-sourcing considerably less. What will be the consequences on the progress of AI science and technology?

The release of ChatGPT can serve a positive purpose, said Bengio, by making the world very aware of both the promise and the risk of AI.

“The thing I like about the, sort-of, media circus around ChatGPT is that it’s a wake-up call,” said Bengio. “I think people have seen the progress of AI in previous years, and many companies, many governments have thought, Yeah, okay, something is happening, and those techies are doing their thing, not realizing that very powerful systems were around the corner.”
