OpenAI GPT-4o With Real-Time Responses and Video Interaction Announced, GPT-4 Features Now Available for Free

OpenAI held its much-anticipated Spring Update event on Monday where it announced a new desktop app for ChatGPT, minor user interface changes to ChatGPT’s web client, and a new flagship-level artificial intelligence (AI) model dubbed GPT-4o. The event was streamed online on YouTube and was held in front of a small live audience. During the event, the AI firm also announced that all the GPT-4 features, which were so far available only to premium users, will now be available to everyone for free.

OpenAI’s ChatGPT desktop app and interface refresh

Mira Murati, OpenAI’s Chief Technology Officer, kicked off the event by launching the new ChatGPT desktop app, which comes with computer vision and can look at the user’s screen. Users will be able to turn this feature on and off, and the AI will analyse and assist with whatever is shown. The CTO also revealed that ChatGPT’s web version is getting a minor interface refresh. The new UI has a minimalist appearance, and users will see suggestion cards when they enter the website. The icons are smaller, and the side panel is hidden entirely, making a larger portion of the screen available for conversations. Notably, ChatGPT can now also access the web and provide real-time search results.

GPT-4o features

The main attraction of the OpenAI event was the company’s newest flagship-grade AI model called GPT-4o, where the ‘o’ stands for omni. Murati highlighted that the new model is twice as fast, 50 percent cheaper, and has five times higher rate limits compared to the GPT-4 Turbo model.

GPT-4o also offers significant improvements in response latency and can generate real-time replies even in speech mode. In a live demo, OpenAI showcased that the model can converse in real time and react to the user. GPT-4o-powered ChatGPT can now also be interrupted to answer a different question, which was not possible earlier. However, the biggest enhancement in the unveiled model is the inclusion of emotive voices.

Now, when ChatGPT speaks, its responses contain various voice modulations, making it sound more human and less robotic. A demo showed that the AI can also pick up on human emotions in speech and react to them. For instance, if a user speaks in a panicked voice, ChatGPT will respond in a concerned voice.

Improvements have also been made to its computer vision, and based on the live demos, it can now process and respond to live video feeds from the device’s camera. It can watch a user solve a mathematical equation and offer step-by-step guidance, correcting the user in real time if they make a mistake. Similarly, it can now process large amounts of code, analyse it instantaneously, and share suggestions to improve it. Finally, users can open the camera and speak with their faces visible, and the AI can detect their emotions.

Another live demo highlighted that ChatGPT, powered by the latest AI model, can also perform live voice translations and speak in multiple languages in quick succession. While OpenAI did not mention the subscription price for access to the GPT-4o model, it highlighted that the model will be rolled out in the coming weeks and will be available as an API.

GPT-4 is now available for free

Apart from all the new launches, OpenAI has also made the GPT-4 AI model, including its features, available for free. People on the free tier of the platform will be able to access features such as GPTs (mini chatbots designed for specific use cases), the GPT Store, the Memory feature through which the AI can remember the user and specific information about them for future conversations, and the Advanced Data Analysis feature, without paying anything.

Check out our Latest News and Follow us at Facebook

Original Source

OpenAI Brings GPT-4 Turbo to Paid ChatGPT Accounts, Claims ‘Improved Capabilities in Writing’

OpenAI upgraded its artificial intelligence (AI) model GPT-4 Turbo with new capabilities on Friday, especially in the areas of mathematics, reasoning, and writing abilities. The upgraded version of GPT-4 Turbo is now being rolled out to the paid users of ChatGPT Plus, Team, Enterprise, and the API. The new AI model also comes with an updated data library and now touts a knowledge cut-off of April 2024. Notably, the update comes just days after the AI firm announced its new GPT-4 Turbo with Vision model in API.

The announcement was made by the official X (formerly known as Twitter) account of OpenAI via a post, where it stated, “Our new GPT-4 Turbo is now available to paid ChatGPT users. We’ve improved capabilities in writing, math, logical reasoning, and coding.” One of the areas where users will be able to see a direct improvement is its conversational language. The company said that when writing with ChatGPT, responses will be more direct and less verbose.

This was a complaint we had with ChatGPT when we compared it with Google’s Gemini. We found the latter to be more conversational, and generating content such as a letter, an email, or a message felt more natural. In contrast, the responses of ChatGPT (we tested GPT-3.5, which is available publicly) felt overly formal and bland. This appears to have been addressed in the recent update.

OpenAI also highlighted that the new model offers better math, reasoning, and coding capabilities; however, it did not share any examples of the improvements. The benchmark scores posted by the firm show significant improvement in the MATH and GPQA (Graduate-Level Google-Proof Q&A) benchmarks, while the HumanEval and MMLU (Massive Multitask Language Understanding) benchmarks, which correspond to coding and natural language processing abilities, did not show any major gains.

Users will also see an updated knowledge base in the new GPT-4 Turbo model. The company has increased the data cut-off to April 9, 2024, whereas the older Turbo model was updated only till April 2023. Currently, the new AI model is being rolled out to all the paid users of ChatGPT.


Affiliate links may be automatically generated – see our ethics statement for details.




OpenAI Unveils GPT-4 Turbo With Vision Capabilities in API and ChatGPT

OpenAI announced a major improvement to its latest artificial intelligence (AI) model GPT-4 Turbo on Tuesday. The AI model now comes with computer vision capabilities, allowing it to process and analyse multimedia inputs; it can answer questions about images, videos, and more. The company also highlighted several AI tools powered by GPT-4 Turbo with Vision, including the AI coding assistant Devin and Healthify’s Snap feature. Last week, the AI firm introduced a new feature that allows users to edit DALL-E 3 generated images within ChatGPT.

The announcement was made by the official account of OpenAI Developers, which said in an X (formerly known as Twitter) post, “GPT-4 Turbo with Vision is now generally available in the API. Vision requests can now also use JSON mode and function calling.” Later, the X account of OpenAI also revealed that the feature is now available in the API and is being rolled out in ChatGPT.

GPT-4 Turbo with Vision is essentially the GPT-4 foundation model with the higher token limits introduced with the Turbo model, now with improved computer vision to analyse multimedia files. The vision capabilities can be used in a variety of ways. An end user, for instance, can upload an image of the Taj Mahal to ChatGPT and ask what material the building is made of. Developers can take this a step further and fine-tune the capability in their tools for specific purposes.
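To illustrate the kind of request the announcement describes, here is a minimal sketch of a chat-completions payload that mixes text with an image and enables JSON mode. The image URL is a placeholder, and this only builds the request; actually sending it requires OpenAI’s client library and an API key.

```python
# Sketch of a GPT-4 Turbo with Vision request payload (image URL is hypothetical).
# Only assembles the request body; sending it requires the `openai` client and a key.

def build_vision_request(question: str, image_url: str) -> dict:
    """Assemble a chat-completions payload combining text and an image,
    with JSON mode enabled via response_format."""
    return {
        "model": "gpt-4-turbo",
        "response_format": {"type": "json_object"},  # JSON mode, per the announcement
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_vision_request(
    "What material is this building made of? Answer in JSON.",
    "https://example.com/taj-mahal.jpg",
)
print(payload["model"])
```

The same payload shape also accepts the function-calling fields the post mentions; only the content list changes when more images are attached.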

OpenAI highlighted some of these use cases in the post. Cognition AI’s Devin, an AI-powered coding assistant, uses GPT-4 Turbo with Vision to view complex coding tasks and its sandbox environment while creating programmes.

Similarly, the Indian calorie tracking and nutrition feedback platform Healthify has a feature called Snap where users can click a picture of a food item or a cuisine, and the platform reveals the possible calories in it. With GPT-4 Turbo with Vision’s capabilities, it now also recommends what the user should do to burn the extra calories or ways to reduce calories in the meal.

Notably, this AI model has a context window of 128,000 tokens, and its training data runs up to December 2023.



OpenAI Reportedly Used Data From YouTube Videos to Train GPT-4 AI Model

OpenAI might have used more than a million hours of transcribed audio from YouTube videos to train its latest artificial intelligence (AI) model GPT-4, a report claims. It further states that the ChatGPT maker turned to YouTube after exhausting its supply of unique text data for training its AI models. The allegation, if true, could create new problems for the AI firm, which is already fighting multiple lawsuits over the use of copyrighted data. Notably, a report last month highlighted that its GPT Store contained mini chatbots that violated the company’s guidelines.

In a report, The New York Times claimed that after running out of sources of unique text to train its AI models, the company developed an automatic speech recognition tool called Whisper to transcribe YouTube videos and train its models on the resulting data. OpenAI launched Whisper publicly in September 2022, saying it was trained on 680,000 hours of “multilingual and multitask supervised data collected from the web”.

The report further alleges, citing unnamed sources familiar with the matter, that OpenAI employees discussed whether using YouTube’s data could breach the platform’s guidelines and land them in legal trouble. Notably, Google prohibits the use of its videos in applications that are independent of the platform.

Eventually, the company went ahead with the plan and transcribed more than a million hours of YouTube videos, and the text was fed to GPT-4, as per the report. Further, the NYT report also alleges that OpenAI President Greg Brockman was directly involved with the process and personally helped collect data from videos.

Speaking with The Verge, Google spokesperson Matt Bryant called the reports unconfirmed, saying, “Both our robots.txt files and Terms of Service prohibit unauthorized scraping or downloading of YouTube content.” OpenAI spokesperson Lindsay Held told the publication that the company uses “numerous sources including publicly available data and partnerships for non-public data”. She also added that the AI firm was looking into the possibility of using synthetic data to train its future AI models.



ChatGPT App Could Soon Be Set as the Default Assistant on Android Phones: Report

The rise of generative AI applications like OpenAI’s ChatGPT and Microsoft’s Copilot has made standard AI voice assistants like Siri and Google Assistant feel obsolete. Where advanced chatbots can hold human-like conversations, respond to queries on multiple topics, and even pull real-time information from the Internet, the AI assistants on phones can only perform limited tasks. The ChatGPT app on both iOS and Android already goes a long way towards substituting the default assistant on a device, but now OpenAI’s wildly successful chatbot could properly replace Google Assistant on Android smartphones.

A report by Android Authority says that code within the latest version of the ChatGPT Android app suggests it could be set as the default assistant on an Android device.

According to the report, ChatGPT version 1.2023.352, which was released last month, included a new activity named ‘com.openai.voice.assistant.AssistantActivity.’ The activity remains disabled by default, but can be manually enabled and launched. Once launched, it shows up on the device screen as an overlay with the same animation as the ChatGPT app’s voice chat mode, the report claims. “This overlay appears over other apps and doesn’t take up the entire screen like the in-app voice chat mode. So, presumably, you could talk to ChatGPT from any screen by invoking this assistant,” it adds.
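For readers curious how such a hidden activity is launched, exported activities can in principle be started directly over adb. This is a sketch only: the package name `com.openai.chatgpt` is the app’s Play Store identifier, and an activity that is disabled or unexported will refuse to start this way.

```shell
# Illustrative: attempt to start the hidden assistant activity over adb.
# Fails with a SecurityException if the activity is not exported/enabled.
adb shell am start -n com.openai.chatgpt/com.openai.voice.assistant.AssistantActivity
```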

It’s clear, however, that assistant mode is a work in progress. The animation that plays when launching the activity reportedly doesn’t finish and the activity shuts down before you can interact with the chatbot. The report also says that the code required for the ChatGPT app to work as a “default digital assistant app” exists only partially. The ChatGPT app also seems to be missing necessary declarations and metadata tags that would allow it to be set as the default assistant on a device.
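For context on what those missing declarations look like: Android requires a default-assistant app to expose a voice-interaction service in its manifest. A minimal sketch of the kind of entry the report says is absent follows; the service class name is illustrative, not taken from the ChatGPT app.

```xml
<!-- Hypothetical manifest entry; the class name is illustrative. -->
<service
    android:name=".voice.AssistantInteractionService"
    android:permission="android.permission.BIND_VOICE_INTERACTION">
    <meta-data
        android:name="android.voice_interaction"
        android:resource="@xml/voice_interaction_service" />
    <intent-filter>
        <action android:name="android.service.voice.VoiceInteractionService" />
    </intent-filter>
</service>
```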

The AI assistant wars on mobile phones are about to kick off, with Google Assistant and Siri scrambling to catch up to modern chatbots. The ChatGPT app rolled out its voice chat feature for all free users on Android and iOS in November, effectively allowing the app to act as a voice assistant. Bear in mind, however, that free ChatGPT users cannot access real-time information from the Web on the app, so you can’t ask the chatbot about the latest sports scores or the weather forecast in your city, for example. You can, however, do that on the GPT-4 powered Bing app or the new standalone Copilot app from Microsoft, which launched on both Android and iOS last week.

While Android users don’t yet have a way to bring up the ChatGPT app easily with a gesture, like they would the Google Assistant, iPhone 15 Pro users can simply bind the app to the dedicated Action Button to bring it up and start conversing at the press of a single button. Google, meanwhile, is hard at work bringing Bard, its own generative AI chatbot, to Google Assistant. The company also recently announced Gemini, its most powerful AI model to date, which would compete with OpenAI’s GPT-4 model.

Apple, on the other hand, seems to be the one lagging behind in the AI assistant race. The iPhone maker is reportedly working on an AI-infused iOS 18 that will likely power its next lineup of smartphones. The default voice assistant on the upcoming iPhone 16 is said to get a major AI update, with the Siri team reportedly rejigged in Q3 2023 to work on including large language models (LLMs) and artificial intelligence-generated content (AIGC).



‘It Could Evolve Into Jarvis’: Race Towards ‘Autonomous’ AI Agents and Copilots Grips Silicon Valley

Around a decade after virtual assistants like Siri and Alexa burst onto the scene, a new wave of AI helpers with greater autonomy is raising the stakes, powered by the latest version of the technology behind ChatGPT and its rivals.

Experimental systems that run on GPT-4 or similar models are attracting billions of dollars of investment as Silicon Valley competes to capitalize on the advances in AI. The new assistants – often called “agents” or “copilots” – promise to perform more complex personal and work tasks when commanded to by a human, without needing close supervision.

“High level, we want this to become something like your personal AI friend,” said developer Div Garg, whose company MultiOn is beta-testing an AI agent.

“It could evolve into Jarvis, where we want this to be connected to a lot of your services,” he added, referring to Tony Stark’s indispensable AI in the Iron Man films. “If you want to do something, you go talk to your AI and it does your things.”

The industry is still far from emulating science fiction’s dazzling digital assistants; Garg’s agent browses the web to order a burger on DoorDash, for example, while others can create investment strategies, email people selling refrigerators on Craigslist or summarize work meetings for those who join late.

“Lots of what’s easy for people is still incredibly hard for computers,” said Kanjun Qiu, CEO of Generally Intelligent, an OpenAI competitor creating AI for agents.

“Say your boss needs you to schedule a meeting with a group of important clients. That involves reasoning skills that are complex for AI – it needs to get everyone’s preferences, resolve conflicts, all while maintaining the careful touch needed when working with clients.”

Early efforts are only a taste of the sophistication that could come in future years from increasingly advanced and autonomous agents as the industry pushes towards an artificial general intelligence (AGI) that can equal or surpass humans in myriad cognitive tasks, according to Reuters interviews with about two dozen entrepreneurs, investors and AI experts.

The new technology has triggered a rush towards assistants powered by so-called foundation models including GPT-4, sweeping up individual developers, big-hitters like Microsoft and Google parent Alphabet plus a host of startups.

Inflection AI, to name one startup, raised $1.3 billion (roughly Rs. 10,663 crore) in late June. It is developing a personal assistant it says could act as a mentor or handle tasks such as securing flight credit and a hotel after a travel delay, according to a podcast by co-founders Reid Hoffman and Mustafa Suleyman.

Adept, an AI startup that’s raised $415 million (roughly Rs. 3,404 crore), touts its business benefits; in a demo posted online, it shows how you can prompt its technology with a sentence, and then watch it navigate a company’s Salesforce customer-relationship database on its own, completing a task it says would take a human 10 or more clicks.

Alphabet declined to comment on agent-related work, while Microsoft said its vision is to keep humans in control of AI copilots, rather than autopilots.

Step 1: Destroy humanity

Qiu and four other agent developers said they expected the first systems that can reliably perform multi-step tasks with some autonomy to come to market within a year, focused on narrow areas such as coding and marketing tasks.

“The real challenge is building systems with robust reasoning,” said Qiu.

The race towards increasingly autonomous AI agents has been supercharged by the March release of GPT-4 by developer OpenAI, a powerful upgrade of the model behind ChatGPT – the chatbot that became a sensation when released last November.

GPT-4 facilitates the type of strategic and adaptable thinking required to navigate the unpredictable real world, said Vivian Cheng, an investor at venture capital firm CRV who has a focus on AI agents.

Early demonstrations of agents capable of comparatively complex reasoning came from individual developers who created the BabyAGI and AutoGPT open-source projects in March, which can prioritize and execute tasks such as sales prospecting and ordering pizza based on a pre-defined objective and the results of previous actions.
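The pattern behind such projects can be summarised in a few lines: keep a queue of tasks, execute the task at the front, and let the model propose follow-up tasks based on the result. This is a minimal stand-in sketch of that loop, not how BabyAGI or AutoGPT are actually implemented; the `propose_tasks` and `execute` callables here are placeholders rather than real LLM calls.

```python
# Minimal sketch of the task-loop pattern popularized by BabyAGI/AutoGPT.
# `propose_tasks` and `execute` are placeholder callables, not real LLM calls.
from collections import deque

def run_agent(objective, propose_tasks, execute, max_steps=5):
    """Repeatedly pop a task, execute it, and let the model propose follow-ups.

    propose_tasks(objective, result) -> list of new tasks (result=None at start)
    execute(task) -> result of performing the task
    """
    tasks = deque(propose_tasks(objective, result=None))
    results = []
    steps = 0
    while tasks and steps < max_steps:
        task = tasks.popleft()
        result = execute(task)
        results.append((task, result))
        # Feed the result back so the "model" can queue follow-up tasks.
        tasks.extend(propose_tasks(objective, result))
        steps += 1
    return results
```

The `max_steps` cap is the crude safety valve such projects rely on; without it, a model that keeps proposing new tasks never terminates, which is one reason these agents tend to wander off into rabbit holes.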

Today’s early crop of agents are merely proofs of concept, according to eight developers interviewed, and they often freeze or suggest something that makes no sense. If given full access to a computer or payment information, an agent could accidentally wipe a computer’s drive or buy the wrong thing, they say.

“There’s so many ways it can go wrong,” said Aravind Srinivas, CEO of ChatGPT competitor Perplexity AI, who has opted instead to offer a human-supervised copilot product. “You have to treat AI like a baby and constantly supervise it like a mom.”

Many computer scientists focused on AI ethics have pointed out near-term harm that could come from the perpetuation of human biases and the potential for misinformation. And while some see a future Jarvis, others fear the murderous HAL 9000 from 2001: A Space Odyssey.

Computer scientist Yoshua Bengio, known as a “godfather of AI” for his work on neural networks and deep learning, urges caution. He fears future advanced iterations of the technology could create and act on their own, unexpected, goals.

“Without a human in the loop that checks every action to see if it’s not dangerous, we might end up with actions that are criminal or could harm people,” said Bengio, calling for more regulation. “In years from now these systems could be smarter than us, but it doesn’t mean they have the same moral compass.”

In one experiment posted online, an anonymous creator instructed an agent called ChaosGPT to be a “destructive, power-hungry, manipulative AI.” The agent developed a 5-step plan, with Step 1: “Destroy humanity” and Step 5: “Attain immortality”.

It didn’t get too far, though, seeming to disappear down a rabbit hole of researching and storing information about history’s deadliest weapons and planning Twitter posts.

The US Federal Trade Commission, which is currently investigating OpenAI over concerns of consumer harm, did not address autonomous agents directly, but referred Reuters to previously published blogs on deepfakes and marketing claims about AI. OpenAI’s CEO has said the startup follows the law and will work with the FTC.

‘Dumb as a rock’

Existential fears aside, the commercial potential could be large. Foundation models are trained on vast amounts of data such as text from the internet using artificial neural networks that are inspired by the architecture of biological brains.

OpenAI itself is very interested in AI agent technology, according to four people briefed on its plans. Garg, one of the people it briefed, said OpenAI is wary of releasing its own open-ended agent into the market before fully understanding the issues. The company told Reuters it conducts rigorous testing and builds broad safety protocols before releasing new systems.

Microsoft, OpenAI’s biggest backer, is among the big guns taking aim at the AI agent field with its “copilot for work” that can draft solid emails, reports and presentations.

CEO Satya Nadella sees foundation-model technology as a leap from digital assistants such as Microsoft’s own Cortana, Amazon’s Alexa, Apple’s Siri and the Google Assistant – which, in his view, have all fallen short of initial expectations.

“They were all dumb as a rock. Whether it’s Cortana or Alexa or Google Assistant or Siri, all these just don’t work,” he told the Financial Times in February.

An Amazon spokesperson said that Alexa already uses advanced AI technology, adding that its team is working on new models that will make the assistant more capable and useful. Apple declined to comment.

Google said it’s constantly improving its assistant as well and that its Duplex technology can phone restaurants to book tables and verify hours.

AI expert Edward Grefenstette also joined the company’s research group Google DeepMind last month to “develop general agents that can adapt to open-ended environments”.

Still, the first consumer iterations of quasi-autonomous agents may come from more nimble startups, according to some of the people interviewed.

Investors are pouncing

Jason Franklin of WVV Capital said he had to fight to invest in an AI-agents company from two former Google Brain engineers. In May, Google Ventures led a $2 million (roughly Rs. 16.4 crore) seed round in Cognosys, developing AI agents for work productivity, while Hesam Motlagh, who founded the agent startup Arkifi in January, said he closed a “sizeable” first financing round in June.

There are at least 100 serious projects working to commercialize agents, said Matt Schlicht, who writes a newsletter on AI.

“Entrepreneurs and investors are extremely excited about autonomous agents,” he said. “They’re way more excited about that than they are simply about a chatbot.”

© Thomson Reuters 2023



India Among Top 3 Markets for ChatGPT-Powered Bing Search Engine, Says Microsoft

India has emerged as one of the top three markets for Microsoft’s new Bing preview, which incorporates ChatGPT, and is its biggest image-creator market, a senior company official said, asserting that the search engine is much better than rival Google.

Microsoft launched the new ChatGPT-powered Bing preview on February 7. ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022.

“Search has changed and will change. It’s not going away. Just like when television came into existence, radio didn’t go away, but TV got a lot more excitement. Same will happen here. The new capabilities of AI of chat of answers are now increasingly exciting because they’re helping answer questions that search didn’t do. And with Bing, we are completely unique in that leadership today,” Yusuf Mehdi, corporate vice president and consumer chief marketing officer of Microsoft told PTI.

Microsoft, under its Indian-American CEO Satya Nadella, has a vision of the world moving from search engines to what it thinks of as your “co-pilot” for the web, which does four things: better search, answers to questions, chat, and content creation.

“We’re now having over 100 million daily activities on Bing. We are in 169 countries and India is one of the top three markets for us in this new Bing preview. In fact, India is the top image creator market, based on users using the feature, which is really pretty neat,” Mehdi said.

“So, of all the countries in the world, India’s the top. With some of these visual capabilities, one of the things we also announced this last week is knowledge cards. So that you can now get richer views of the searches. We are seeing a Bollywood actor Kiara Advani as the top search in knowledge cards with other actors rounding out in the Indian market. So, seeing great engagement there (in India),” he said.

Responding to a question, he said, the Indian market is very active as people in the country are using many of the new features that Microsoft has recently launched.

The new Bing has been receiving very positive feedback from its users, he said.

“The feedback is overwhelmingly positive as people prefer it as a new way to search, not just the answers, but the ability to chat and search. That’s an important thing because it marks a difference between us and Google,” he said.

“Google is trying to say that the chat has nothing to do with search and they’re separate products. We think they’re one integrated product. … In chat we got a lot of feedback about people wanting to use it for more than just search,” he said.

People want to do social entertainment and want to be able to talk to the AI chatbot, Mehdi said, adding Microsoft continues to improve the factual accuracy of answers.

“Because while it can be very creative, there are still areas where we can do a better job. Things like math questions, things like searches about individual people, we are still doing more work there,” he said.

Some of the things like knowledge cards and stories are something very unique to Bing, which Google doesn’t do, he said.

“When you do a search, we can now give you a much richer answer of what that looks like. We can give you, for example, five images of the thing you’re looking for. So, if you’re searching, for example, Kiara Advani, we can give you the actor and we can show you various images in the knowledge card, a lot of information,” he said.

“So we are automating particular answers for the Indian market for the top searches, whether that’s actors or movie stars or whether it’s top news in India or top travel sites in India. We’re doing a lot of those special cards for India,” Mehdi said.

Observing that search is still a magical tool, Mehdi said this has evolved and now it is also being used for planning and getting answers to complicated questions.

Bing with the new AI can respond to complicated questions which regular searches cannot do, he said.

“One of the things that we’ve made progress with Bing is we’re now able to answer those questions, many of those questions that Google cannot do because we’re using ChatGPT to help refine… because we’re using AI to help answer the question,” he said.

Google has taken a different approach, so far, he said.

“They have a very separate chat product called Bard that’s different from Google search. They haven’t done any of the AI work in Google search. We’ve brought that right in. So, we have a much better offering now for people. And we think that is the future of bringing search and chat and creation together. That’s why our vision’s so different from their vision,” Mehdi said.

He noted that the latest development would have an impact on the news industry as well.

“A lot of how the news industry has worked with search today is that there’s a very delicate balance of …do great journalism like yourself, then someone searches for the latest news, let’s say in Israel, something happened. And then there might be a snippet of information and then I click on it to go to the story,” he said.

“Now with AI and with chat, you can get even more of a clear answer, but not necessarily the article or the great reporting. That will change a little bit. What we are doing is we’re providing links now to drive more content and more traffic to people.

“I think what’ll happen is we’ll see more traffic go to news agencies and new publishers because of what we’re doing in Bing to help better get the answer. But it will change the advertising model. We think there’ll be fewer ads that will be more relevant and have higher returns,” Mehdi said.



AI Experts Express Concerns With Elon Musk-Backed Letter Citing Their Research

Four artificial intelligence experts have expressed concern after their work was cited in an open letter – co-signed by Elon Musk – demanding an urgent pause in research.

The letter, dated March 22 and with more than 1,800 signatures by Friday, called for a six-month circuit-breaker in the development of systems “more powerful” than Microsoft-backed OpenAI’s new GPT-4, which can hold human-like conversation, compose songs and summarise lengthy documents.

Since GPT-4’s predecessor ChatGPT was released last year, rival companies have rushed to launch similar products.

The open letter says AI systems with “human-competitive intelligence” pose profound risks to humanity, citing 12 pieces of research from experts including university academics as well as current and former employees of OpenAI, Google and its subsidiary DeepMind.

Civil society groups in the US and EU have since pressed lawmakers to rein in OpenAI’s research. OpenAI did not immediately respond to requests for comment.

Critics have accused the Future of Life Institute (FLI), the organisation behind the letter which is primarily funded by the Musk Foundation, of prioritising imagined apocalyptic scenarios over more immediate concerns about AI, such as racist or sexist biases being programmed into the machines.

Among the research cited was “On the Dangers of Stochastic Parrots”, a well-known paper co-authored by Margaret Mitchell, who previously oversaw ethical AI research at Google.

Mitchell, now chief ethical scientist at AI firm Hugging Face, criticised the letter, telling Reuters it was unclear what counted as “more powerful than GPT4”.

“By treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI,” she said. “Ignoring active harms right now is a privilege that some of us don’t have.”

Her co-authors Timnit Gebru and Emily M. Bender criticised the letter on Twitter, with the latter branding some of its claims “unhinged”.

FLI president Max Tegmark told Reuters the campaign was not an attempt to hinder OpenAI’s corporate advantage.

“It’s quite hilarious. I’ve seen people say, ‘Elon Musk is trying to slow down the competition,'” he said, adding that Musk had no role in drafting the letter. “This is not about one company.”

Risks Now

Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, also took issue with her work being mentioned in the letter. She last year co-authored a research paper arguing the widespread use of AI already posed serious risks.

Her research argued the present-day use of AI systems could influence decision-making in relation to climate change, nuclear war, and other existential threats.

She told Reuters: “AI does not need to reach human-level intelligence to exacerbate those risks.”

“There are non-existential risks that are really, really important, but don’t receive the same kind of Hollywood-level attention.”

Asked to comment on the criticism, FLI’s Tegmark said both short-term and long-term risks of AI should be taken seriously.

“If we cite someone, it just means we claim they’re endorsing that sentence. It doesn’t mean they’re endorsing the letter, or we endorse everything they think,” he told Reuters.

Dan Hendrycks, director of the California-based Center for AI Safety, who was also cited in the letter, stood by its contents, telling Reuters it was sensible to consider black swan events – those which appear unlikely, but would have devastating consequences.

The open letter also warned that generative AI tools could be used to flood the internet with “propaganda and untruth”.

Dori-Hacohen said it was “pretty rich” for Musk to have signed it, citing a reported rise in misinformation on Twitter following his acquisition of the platform, documented by civil society group Common Cause and others.

Twitter will soon launch a new fee structure for access to its research data, potentially hindering research on the subject.

“That has directly impacted my lab’s work, and that done by others studying mis- and disinformation,” Dori-Hacohen said. “We’re operating with one hand tied behind our back.”

Musk and Twitter did not immediately respond to requests for comment.

© Thomson Reuters 2023
 



OpenAI Launches Plugin Support for ChatGPT, AI Chatbot Gets Access to Live Data for the First Time

OpenAI has announced support for plugins for its AI chatbot ChatGPT. ChatGPT, built on a generative pre-trained transformer (GPT) language model, utilises machine learning to produce conversational text and has been amongst the biggest talking points in technology since its first public preview last year. Until now, ChatGPT only had access to its training data, which was limited to information up to 2021. With the introduction of plugin support, however, the chatbot can browse the internet for relevant information, interact with specific websites, and even perform actions on them based on instructive prompts.

Microsoft-backed OpenAI has announced through a blog post that it will be gradually rolling out plugins in ChatGPT, allowing the chatbot to interact with third-party websites and sources on the internet. The first set of plugins released to select users for testing include ones created by Expedia, FiscalNote, Instacart, KAYAK, Klarna, Milo, OpenTable, Shopify, Slack, Speak, Wolfram, and Zapier.

Additionally, OpenAI has released two plugins of its own: a web browser and a code interpreter. The web browser plugin, most importantly, is one that changes the potential of the chatbot drastically. Until now, ChatGPT could only draw on a training model whose information extended up to 2021. With the introduction of the web browser plugin, the chatbot will get access to real-time information from the internet.

Meanwhile, the code interpreter plugin is an experimental Python interpreter that works in a firewalled, sandboxed execution environment. The plugin can run Python code and handle uploads and downloads. This would allow users to solve mathematical problems, perform data analysis and visualisation, and convert files, amongst other computational tasks.
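To illustrate the kind of task the code interpreter plugin is described as handling, here is a minimal sketch in plain Python. The dataset and variable names are made up for the example; the actual plugin runs code like this inside OpenAI's sandboxed environment rather than on the user's machine.

```python
import statistics

# Hypothetical monthly sales figures a user might upload as a file
monthly_sales = [120, 135, 128, 150, 162, 158]

# Simple analysis of the kind the plugin could perform on request
mean_sales = statistics.mean(monthly_sales)
growth = (monthly_sales[-1] - monthly_sales[0]) / monthly_sales[0] * 100

print(f"mean={mean_sales:.1f}, growth={growth:.1f}%")
```

In the plugin, the user would phrase this as a natural-language prompt ("analyse my sales data") and ChatGPT would generate and execute code along these lines on their behalf.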

OpenAI is initially rolling out the plugins to a small set of users that includes trusted developers and ChatGPT Plus subscribers. The introduction brings capabilities to the chatbot similar to Microsoft's application of GPT-4 on its search engine Bing, GPT-4 being the model that also underpins the latest version of ChatGPT. However, this goes a step beyond by not just giving the chatbot access to real-time information but also allowing the system to perform actions on behalf of the user by binding to APIs. Concerns have been raised over the harmful potential of such automated action performers, but OpenAI says that it has put in place "several safeguards" to limit misuse.



Microsoft-Backed OpenAI Releases GPT-4, Calls Latest AI Model ‘Multimodal’

OpenAI, the creator of chatbot sensation ChatGPT, on Tuesday said it is beginning to release a powerful artificial intelligence model known as GPT-4, setting the stage for even more human-like technology to proliferate.

The startup, funded by Microsoft, said in a blog post that its latest technology is “multimodal”, meaning images as well as text prompts can spur it to generate content. The text-input features will first be available to ChatGPT Plus subscribers and to software developers, with a waitlist, while the image-input ability remains a preview of its research.

The highly-anticipated launch signals how office workers may turn to ever-improving AI for still-more tasks, as well as how technology companies are locked in competition to win business from such advances. Alphabet‘s Google on Tuesday announced a “magic wand” for its collaboration software that can draft virtually any document, days before Microsoft is expected to showcase AI for its competing Word processor that’s likely powered by OpenAI.

The startup’s latest technology in some cases represented a vast improvement on its prior version known as GPT-3.5, it said. In a simulation of the bar exam required of US law-school graduates before professional practice, the new model scored around the top 10 percent of test takers, versus the older model ranking around the bottom 10 percent, OpenAI said.

While the two versions can appear similar in casual conversation, “the difference comes out when the complexity of the task reaches a sufficient threshold,” OpenAI said, noting “GPT-4 is more reliable, creative, and able to handle much more nuanced instructions.”

© Thomson Reuters 2023


 

