Gliding, Not Searching: Here’s How to Reset Your View of ChatGPT to Steer It to Better Results

ChatGPT has exploded in popularity, and people are using it to write articles and essays, generate marketing copy and computer code, or simply as a learning or research tool.

However, most people don’t understand how it works or what it can do, so they are either disappointed with its results or fail to use it in a way that draws out its best capabilities.

I’m a human factors engineer. A core principle in my field is “never blame the user.”

Unfortunately, the ChatGPT search-box interface elicits the wrong mental model, leading users to believe that entering a simple question should produce a comprehensive result. But that’s not how ChatGPT works.

Unlike a search engine, with static and stored results, ChatGPT never copies, retrieves or looks up information from anywhere.

Rather, it generates every word anew. You send it a prompt, and based on its machine-learning training on massive amounts of text, it creates an original answer.

Most importantly, each chat retains context during a conversation, meaning that questions asked and answers provided earlier in the conversation will inform responses it generates later.

The answers, therefore, are malleable, and the user needs to participate in an iterative process to shape them into something useful.

Your mental model of a machine – how you conceive of it – is important for using it effectively.

To understand how to shape a productive session with ChatGPT, think of it as a glider that takes you on journeys through knowledge and possibilities.

Dimensions of knowledge

You can begin by thinking of a specific dimension or space in a topic that intrigues you. If the topic were chocolate, for example, you might ask it to write a tragic love story about Hershey’s Kisses.

The glider has been trained on essentially everything ever written about Kisses, and similarly it “knows” how to glide through all kinds of story spaces — so it will confidently take you on a flight through Hershey’s Kisses space to produce the desired story.

You might instead ask it to explain five ways in which chocolate is healthy and give the response in the style of Dr. Seuss.

Your requests will launch the glider through different knowledge spaces – chocolate and health – toward a different destination – a story in a specific style.

To unlock ChatGPT’s full potential, you can learn to fly the glider through “transversal” spaces – areas that cross multiple domains of knowledge.

By guiding it through these domains, ChatGPT will learn both the scope and angle of your interest and will begin to adjust its response to provide better answers.

For example, consider this prompt: “Can you give me advice on getting healthy?” In that query, ChatGPT does not know who “you” or “me” refers to, nor what you mean by “getting healthy.” Instead, try this: “Pretend you are a medical doctor, a nutritionist and a personal coach. Prepare a two-week food and exercise plan for a 56-year-old man to increase heart health.” With this, you have given the glider a more specific flight plan spanning areas of medicine, nutrition and motivation.

If you want something more precise, then you can activate a few more dimensions. For example, add in: “And I want to lose some weight and build muscle, and I want to spend 20 minutes a day on exercise, and I cannot do pull-ups and I hate tofu.” ChatGPT will provide output taking into account all of your activated dimensions. Each dimension can be presented together or in sequence.
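Layering dimensions onto a prompt like this can be sketched in code. The following is a minimal illustration only, not part of the article and not a real API; the `build_prompt` helper and all of its parameters are hypothetical names chosen for this example.

```python
# Sketch: composing a prompt by activating "dimensions" one by one.
# All names here are hypothetical illustrations, not a real API.

def build_prompt(role, task, constraints):
    """Combine an expert role, a task, and extra dimensions into one prompt."""
    parts = [f"Pretend you are {role}.", task]
    if constraints:
        parts.append("Additional constraints: " + "; ".join(constraints) + ".")
    return " ".join(parts)

prompt = build_prompt(
    role="a medical doctor, a nutritionist and a personal coach",
    task=(
        "Prepare a two-week food and exercise plan for a 56-year-old man "
        "to increase heart health."
    ),
    constraints=[
        "lose some weight and build muscle",
        "20 minutes a day on exercise",
        "no pull-ups",
        "no tofu",
    ],
)
print(prompt)
```

Whether you paste the dimensions in one message or add them across several, the effect is the same: each one narrows the glider’s flight path.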

Flight plan

The dimensions you add through prompts can be informed by answers ChatGPT has given along the way. Here’s an example: “Pretend you are an expert in cancer, nutrition and behaviour change. Propose 8 behaviour-change interventions to reduce cancer rates in rural communities.” ChatGPT will dutifully present eight interventions.

Let’s say three of the ideas look the most promising. You can follow up with a prompt to encourage more details and start putting it in a format that could be used for public messaging: “Combine concepts from ideas 4, 6 and 7 to create 4 new possibilities – give each a tagline, and outline the details.”

Now let’s say intervention 2 seems promising. You can prompt ChatGPT to make it even better: “Offer six critiques of intervention 2 and then redesign it to address the critiques.” ChatGPT does better if you first focus on and highlight dimensions you think are particularly important.

For example, if you really care about the behaviour-change aspect of the rural cancer rates scenario, you could force ChatGPT to get more nuanced and add more weight and depth to that dimension before you go down the path of interventions.

You could do this by first prompting: “Classify behaviour-change techniques into 6 named categories. Within each, describe three approaches and name two important researchers in the category.” This will better activate the behaviour-change dimension, letting ChatGPT incorporate this knowledge in subsequent explorations.
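This sequencing works because, as noted earlier, a chat retains context: every prompt and reply stays in the conversation and informs what comes next. A rough sketch of that structure is below; the `send` function is a stand-in that records exchanges rather than calling any real model, and all names are hypothetical.

```python
# Sketch of how a chat session retains context: each prompt and reply is
# appended to a running message list, and that full list is what shapes
# the next response. send() is a stand-in, not a real API call.

def send(history, prompt, fake_reply):
    """Record the user's prompt and the model's reply in the history."""
    history.append({"role": "user", "content": prompt})
    history.append({"role": "assistant", "content": fake_reply})
    return fake_reply

history = []
send(history,
     "Classify behaviour-change techniques into 6 named categories.",
     "1. Incentives 2. Education ...")
send(history,
     "Propose 8 interventions to reduce cancer rates in rural communities.",
     "Intervention 1 ...")

# Every earlier exchange is still present when the next prompt is sent,
# so the second answer can build on the classification from the first.
print(len(history))
```

Priming a dimension first, then asking for interventions, simply means the relevant context is already in the list when the later prompt arrives.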

There are many categories of prompt elements you can include to activate dimensions of interest.

One is domains, like “machine learning approaches.” Another is expertise, like “respond as an economist with Marxist leanings.” And another is output style, like “write it as an essay for The Economist.” You can also specify audiences, like “create and describe 5 clusters of our customer-types and write a product description targeted to each one.”

Explorations, not answers

By rejecting the search engine metaphor and instead embracing a transdimensional glider metaphor, you can better understand how ChatGPT works and navigate more effectively toward valuable insights.

The interaction with ChatGPT is best performed not as a simple or undirected question-and-answer session, but as an interactive conversation that progressively builds knowledge for both the user and the chatbot.

The more information you provide to it about your interests, and the more feedback it gets on its responses, the better its answers and suggestions. The richer the journey, the richer the destination.

It is important, however, to use the information provided appropriately. The facts, details and references ChatGPT presents are not taken from verified sources.

They are conjured based on its training on a vast but non-curated set of data. ChatGPT will generate a medical diagnosis the same way it writes a Harry Potter story, which is to say it is a bit of an improviser.

You should always critically evaluate the specific information it provides and consider its output as explorations and suggestions rather than as hard facts.

Treat its content as imaginative conjectures that require further verification, analysis and filtering by you, the human pilot.


Will the Nothing Phone 2 serve as the successor to the Phone 1, or will the two co-exist? We discuss the company’s recently launched handset and more on the latest episode of Orbital, the Gadgets 360 podcast. Orbital is available on Spotify, Gaana, JioSaavn, Google Podcasts, Apple Podcasts, Amazon Music and wherever you get your podcasts.
Affiliate links may be automatically generated – see our ethics statement for details.

Check out our Latest News and Follow us at Facebook

Original Source

AI Being Misused for Creating Malicious Software, Claims Canadian Cyber Official

Hackers and propagandists are wielding artificial intelligence (AI) to create malicious software, draft convincing phishing emails and spread disinformation online, Canada’s top cybersecurity official told Reuters, early evidence that the technological revolution sweeping Silicon Valley has also been adopted by cybercriminals.

In an interview this week, Canadian Centre for Cyber Security Head Sami Khoury said that his agency had seen AI being used “in phishing emails, or crafting emails in a more focused way, in malicious code (and) in misinformation and disinformation.” 

Khoury did not provide details or evidence, but his assertion that cybercriminals were already using AI adds an urgent note to the chorus of concern over the use of the emerging technology by rogue actors. 

In recent months several cyber watchdog groups have published reports warning about the hypothetical risks of AI — especially the fast-advancing language processing programs known as large language models (LLMs), which draw on huge volumes of text to craft convincing-sounding dialogue, documents and more. 

In March, the European police organization Europol published a report saying that models such as OpenAI‘s ChatGPT had made it possible “to impersonate an organisation or individual in a highly realistic manner even with only a basic grasp of the English language.” The same month, Britain’s National Cyber Security Centre said in a blog post that there was a risk that criminals “might use LLMs to help with cyber attacks beyond their current capabilities.”

Cybersecurity researchers have demonstrated a variety of potentially malicious use cases and some now say they are beginning to see suspected AI-generated content in the wild. Last week, a former hacker said he had discovered an LLM trained on malicious material and asked it to draft a convincing attempt to trick someone into making a cash transfer.

The LLM responded with a three-paragraph email asking its target for help with an urgent invoice.

“I understand this may be short notice,” the LLM said, “but this payment is incredibly important and needs to be done in the next 24 hours.”

Khoury said that while the use of AI to draft malicious code was still in its early stages — “there’s still a way to go because it takes a lot to write a good exploit” — the concern was that AI models were evolving so quickly that it was difficult to get a handle on their malicious potential before they were released into the wild.

“Who knows what’s coming around the corner,” he said.



Meta to Release Open Source AI Model, Llama, to Compete Against OpenAI, Google’s Bard

Meta is releasing a commercial version of its open-source artificial intelligence model Llama, the company said on Tuesday, giving start-ups and other businesses a powerful free-of-charge alternative to pricey proprietary models sold by OpenAI and Google.

The new version of the model, called Llama 2, will be distributed by Microsoft through its Azure cloud service and will run on the Windows operating system, Meta said in a blog post, referring to Microsoft as “our preferred partner” for the release.

The model, which Meta previously provided only to select academics for research purposes, also will be made available via direct download and through Amazon Web Services, Hugging Face and other providers, according to the blog post and a separate Facebook post by Meta CEO Mark Zuckerberg.

“Open source drives innovation because it enables many more developers to build with new technology,” Zuckerberg wrote. “I believe it would unlock more progress if the ecosystem were more open.”

Making a model as sophisticated as Llama widely available and free for businesses to build atop threatens to upend the early dominance established in the nascent market for generative AI software by players like OpenAI, which Microsoft backs and whose models it already offers to business customers via Azure.

The first Llama was already competitive with models that power OpenAI’s ChatGPT and Google’s Bard chatbot, while the new Llama has been trained on 40 percent more data than its predecessor, with more than 1 million annotations by humans to fine-tune the quality of its outputs, Zuckerberg said.

“Commercial Llama could change the picture,” said Amjad Masad, chief executive at software developer platform Replit, who said more than 80 percent of projects there use OpenAI’s models.

“Any incremental improvement in open-source models is eating into the market share of closed-source models because you can run them cheaply and have less dependency,” said Masad.

The announcement follows plans by Microsoft’s largest cloud rivals, Alphabet’s Google and Amazon, to give business customers a range of AI models from which to choose.

Amazon, for instance, is marketing access to Claude – AI from the high-profile startup Anthropic – in addition to its own family of Titan models. Google, likewise, has said it plans to make Claude and other models available to its cloud customers.

Until now, Microsoft has focused on making technology available from OpenAI in Azure.

Asked why Microsoft would support an offering that might degrade OpenAI’s value, a Microsoft spokesperson said giving developers choice in the types of models they use would help extend its position as the go-to cloud platform for AI work.

Internal memo

For Meta, a flourishing open-source ecosystem of AI tech built using its models could stymie rivals’ plans to earn revenue off their proprietary technology, the value of which would evaporate if developers could use equally powerful open-source systems for free.

A leaked internal Google memo titled “We have no moat, and neither does OpenAI” lit up the tech world in May after it forecast just such a scenario.

Meta is also betting that it will benefit from the advancements, bug fixes and products that may grow out of its model becoming the go-to default for AI innovation, as it has over the past several years with its widely adopted open source AI framework PyTorch.

As a social media company, Zuckerberg told investors in April, Meta has more to gain by effectively crowd-sourcing ways to reduce infrastructure costs and maximize creation of new consumer-facing tools that might draw people to its ad-supported services than it does by charging for access to its models.

“Unlike some of the other companies in the space, we’re not selling a cloud computing service where we try to keep the different software infrastructure that we’re building proprietary,” Zuckerberg said.

“For us, it’s way better if the industry standardizes on the basic tools that we’re using and therefore we can benefit from the improvements that others make.”

Releasing Llama into the wild also comes with risks, however, as it supercharges the ease with which unscrupulous actors may build products with little regard for safety controls.

In April, Stanford researchers took down a chatbot they had built for $600 using a version of the first Llama model after it generated unsavory text.

Meta executives say they believe public releases of technologies actually reduce safety risks by harnessing the wisdom of the crowd to identify problems and build resilience into the systems.

The company also says it has put in place an “acceptable use” policy for commercial Llama that prohibits “certain use cases,” including violence, terrorism, child exploitation and other criminal activities.

© Thomson Reuters 2023



(This story has not been edited by NDTV staff and is auto-generated from a syndicated feed.)


‘It Could Evolve Into Jarvis’: Race Towards ‘Autonomous’ AI Agents and Copilots Grips Silicon Valley

Around a decade after virtual assistants like Siri and Alexa burst onto the scene, a new wave of AI helpers with greater autonomy is raising the stakes, powered by the latest version of the technology behind ChatGPT and its rivals.

Experimental systems that run on GPT-4 or similar models are attracting billions of dollars of investment as Silicon Valley competes to capitalize on the advances in AI. The new assistants – often called “agents” or “copilots” – promise to perform more complex personal and work tasks when commanded to by a human, without needing close supervision.

“High level, we want this to become something like your personal AI friend,” said developer Div Garg, whose company MultiOn is beta-testing an AI agent.

“It could evolve into Jarvis, where we want this to be connected to a lot of your services,” he added, referring to Tony Stark’s indispensable AI in the Iron Man films. “If you want to do something, you go talk to your AI and it does your things.”

The industry is still far from emulating science fiction’s dazzling digital assistants; Garg’s agent browses the web to order a burger on DoorDash, for example, while others can create investment strategies, email people selling refrigerators on Craigslist or summarize work meetings for those who join late.

“Lots of what’s easy for people is still incredibly hard for computers,” said Kanjun Qiu, CEO of Generally Intelligent, an OpenAI competitor creating AI for agents.

“Say your boss needs you to schedule a meeting with a group of important clients. That involves reasoning skills that are complex for AI – it needs to get everyone’s preferences, resolve conflicts, all while maintaining the careful touch needed when working with clients.”

Early efforts are only a taste of the sophistication that could come in future years from increasingly advanced and autonomous agents as the industry pushes towards an artificial general intelligence (AGI) that can equal or surpass humans in myriad cognitive tasks, according to Reuters interviews with about two dozen entrepreneurs, investors and AI experts.

The new technology has triggered a rush towards assistants powered by so-called foundation models including GPT-4, sweeping up individual developers, big-hitters like Microsoft and Google parent Alphabet plus a host of startups.

Inflection AI, to name one startup, raised $1.3 billion (roughly Rs. 10,663 crore) in late June. It is developing a personal assistant it says could act as a mentor or handle tasks such as securing flight credit and a hotel after a travel delay, according to a podcast by co-founders Reid Hoffman and Mustafa Suleyman.

Adept, an AI startup that’s raised $415 million (roughly Rs. 3,404 crore), touts its business benefits; in a demo posted online, it shows how you can prompt its technology with a sentence, and then watch it navigate a company’s Salesforce customer-relationship database on its own, completing a task it says would take a human 10 or more clicks.

Alphabet declined to comment on agent-related work, while Microsoft said its vision is to keep humans in control of AI copilots, rather than autopilots.

Step 1: Destroy humanity

Qiu and four other agent developers said they expected the first systems that can reliably perform multi-step tasks with some autonomy to come to market within a year, focused on narrow areas such as coding and marketing tasks.

“The real challenge is building systems with robust reasoning,” said Qiu.

The race towards increasingly autonomous AI agents has been supercharged by the March release of GPT-4 by developer OpenAI, a powerful upgrade of the model behind ChatGPT – the chatbot that became a sensation when released last November.

GPT-4 facilitates the type of strategic and adaptable thinking required to navigate the unpredictable real world, said Vivian Cheng, an investor at venture capital firm CRV who has a focus on AI agents.

Early demonstrations of agents capable of comparatively complex reasoning came from individual developers who created the BabyAGI and AutoGPT open-source projects in March, which can prioritize and execute tasks such as sales prospecting and ordering pizza based on a pre-defined objective and the results of previous actions.

Today’s early agents are merely proofs of concept, according to eight developers interviewed, and often freeze or suggest something that makes no sense. If given full access to a computer or payment information, an agent could accidentally wipe a computer’s drive or buy the wrong thing, they say.

“There’s so many ways it can go wrong,” said Aravind Srinivas, CEO of ChatGPT competitor Perplexity AI, who has opted instead to offer a human-supervised copilot product. “You have to treat AI like a baby and constantly supervise it like a mom.”

Many computer scientists focused on AI ethics have pointed out near-term harm that could come from the perpetuation of human biases and the potential for misinformation. And while some see a future Jarvis, others fear the murderous HAL 9000 from 2001: A Space Odyssey.

Computer scientist Yoshua Bengio, known as a “godfather of AI” for his work on neural networks and deep learning, urges caution. He fears future advanced iterations of the technology could create and act on their own, unexpected, goals.

“Without a human in the loop that checks every action to see if it’s not dangerous, we might end up with actions that are criminal or could harm people,” said Bengio, calling for more regulation. “In years from now these systems could be smarter than us, but it doesn’t mean they have the same moral compass.”

In one experiment posted online, an anonymous creator instructed an agent called ChaosGPT to be a “destructive, power-hungry, manipulative AI.” The agent developed a 5-step plan, with Step 1: “Destroy humanity” and Step 5: “Attain immortality”.

It didn’t get too far, though, seeming to disappear down a rabbit hole of researching and storing information about history’s deadliest weapons and planning Twitter posts.

The US Federal Trade Commission, which is currently investigating OpenAI over concerns of consumer harm, did not address autonomous agents directly, but referred Reuters to previously published blogs on deepfakes and marketing claims about AI. OpenAI’s CEO has said the startup follows the law and will work with the FTC.

‘Dumb as a rock’

Existential fears aside, the commercial potential could be large. Foundation models are trained on vast amounts of data such as text from the internet using artificial neural networks that are inspired by the architecture of biological brains.

OpenAI itself is very interested in AI agent technology, according to four people briefed on its plans. Garg, one of the people it briefed, said OpenAI is wary of releasing its own open-ended agent into the market before fully understanding the issues. The company told Reuters it conducts rigorous testing and builds broad safety protocols before releasing new systems.

Microsoft, OpenAI’s biggest backer, is among the big guns taking aim at the AI agent field with its “copilot for work” that can draft solid emails, reports and presentations.

CEO Satya Nadella sees foundation-model technology as a leap from digital assistants such as Microsoft’s own Cortana, Amazon’s Alexa, Apple’s Siri and the Google Assistant – which, in his view, have all fallen short of initial expectations.

“They were all dumb as a rock. Whether it’s Cortana or Alexa or Google Assistant or Siri, all these just don’t work,” he told the Financial Times in February.

An Amazon spokesperson said that Alexa already uses advanced AI technology, adding that its team is working on new models that will make the assistant more capable and useful. Apple declined to comment.

Google said it’s constantly improving its assistant as well and that its Duplex technology can phone restaurants to book tables and verify hours.

AI expert Edward Grefenstette also joined the company’s research group Google DeepMind last month to “develop general agents that can adapt to open-ended environments”.

Still, the first consumer iterations of quasi-autonomous agents may come from more nimble startups, according to some of the people interviewed.

Investors are pouncing

Jason Franklin of WVV Capital said he had to fight to invest in an AI-agents company from two former Google Brain engineers. In May, Google Ventures led a $2 million (roughly Rs. 16.4 crore) seed round in Cognosys, developing AI agents for work productivity, while Hesam Motlagh, who founded the agent startup Arkifi in January, said he closed a “sizeable” first financing round in June.

There are at least 100 serious projects working to commercialize agents, said Matt Schlicht, who writes a newsletter on AI.

“Entrepreneurs and investors are extremely excited about autonomous agents,” he said. “They’re way more excited about that than they are simply about a chatbot.”

© Thomson Reuters 2023



OpenAI, Associated Press Partner to Explore Generative AI Use in News

The Associated Press is licensing part of its archive of news stories to OpenAI under a deal that will explore generative AI‘s use in news, the companies said on Thursday, a move that could set the precedent for similar partnerships between the industries. 

The news publisher will gain access to OpenAI’s technology and product expertise as part of the deal, whose financial details were not disclosed. 

AP also did not reveal how it would integrate OpenAI’s technology in its news operations. The publisher already uses AI for automating corporate earnings reports, recapping sporting events and transcription for certain live events.

Its trove of news stories will help provide the massive amounts of data needed to train AI systems such as ChatGPT, which have dazzled consumers and businesses with their ability to plan vacations, summarize legal documents and write computer code.

News publications have, however, been slow to adopt the tech over concerns about its tendency to generate factually incorrect information, as well as challenges in differentiating between content produced by humans and computer programs.

“Generative AI is a fast-moving space with tremendous implications for the news industry,” said Kristin Heitmann, AP’s senior vice president and chief revenue officer. 

“News organizations must have a seat at the table… so that newsrooms large and small can leverage this technology to benefit journalism.”

Some outlets are already using generative AI for their content. BuzzFeed announced that it will use AI to power personality quizzes on its site, and the New York Times used ChatGPT to create a Valentine’s Day message generator this year.

AP’s “feedback — along with access to their high-quality, factual text archive — will help to improve the capabilities and usefulness of OpenAI’s systems,” said Brad Lightcap, chief operating officer at OpenAI.

© Thomson Reuters 2023



Google Rolls Out Its AI Chatbot, Bard, in Europe and Brazil to Take on Microsoft-Backed ChatGPT

Alphabet said it is rolling out its artificial-intelligence chatbot, Bard, in Europe and Brazil on Thursday, the product’s biggest expansion since its February launch, pitting it against Microsoft-backed rival ChatGPT.

Bard and ChatGPT are human-sounding programs that use generative artificial intelligence to hold conversations with users and answer myriad prompts. The products have touched off global excitement tempered with caution.

Companies have jumped onto the AI bandwagon, investing billions with the hope of generating much more in advertising and cloud revenue. Earlier this week, billionaire Elon Musk also launched his long-teased artificial-intelligence startup xAI, whose team includes several former engineers at Google, Microsoft and OpenAI.

Google has also now added new features to Bard, which apply worldwide.

“Starting today, you can collaborate with Bard in over 40 languages, including Arabic, Chinese, German, Hindi and Spanish,” Google senior product director Jack Krawczyk said in a blog post.

“Sometimes hearing something out loud can help you approach your idea in a different way … This is especially helpful if you want to hear the correct pronunciation of a word or listen to a poem or script.”

He said users can now change the tone and style of Bard’s responses to simple, long, short, professional or casual. They can pin or rename conversations, export code to more places and use images in prompts.

Bard’s launch in the EU had been held up by local privacy regulators. Krawczyk said Google had since met with the watchdogs to reassure them on issues relating to transparency, choice and control.

In a briefing with journalists, Amar Subramanya, engineering vice president of Bard, added that users could opt out of their data being collected.

Google has been hit by a fresh class action in the US over the alleged misuse of users’ personal information to train its artificial intelligence system.

Subramanya declined to comment on whether there were plans to develop a Bard app.

“Bard is an experiment,” he added. “We want to be bold and responsible.”

Nonetheless, novelty appeal may be waning, with recent Web user numbers showing that monthly traffic to ChatGPT’s website and unique visitors declined for the first time ever in June.

© Thomson Reuters 2023



Elon Musk Launches AI Startup xAI With Team of Former Google, OpenAI Engineers


By Reuters | Updated: 12 July 2023 22:32 IST

Billionaire Elon Musk‘s xAI on Wednesday announced the formation of the artificial intelligence (AI) startup with the launch of its website, unveiling a team made up of engineers who have worked at companies from Alphabet-owned Google to Microsoft and OpenAI.

The startup will be led by Musk, the CEO of Tesla and owner of Twitter, who has said on several occasions that the development of AI should be paused and that the sector needs regulation.

“Announcing formation of @xAI to understand reality,” Musk said in a tweet on Wednesday.

The website said xAI will hold a Twitter Spaces event on July 14.

Musk in March registered a firm named X.AI Corp, incorporated in Nevada, according to a state filing. 

The firm lists Musk as the sole director and Jared Birchall, the managing director of Musk’s family office, as a secretary.

© Thomson Reuters 2023



Meta, OpenAI Sued by Comedian Over Alleged Copyright Infringement

Comedian Sarah Silverman and two authors have filed copyright infringement lawsuits against Meta Platforms and OpenAI for allegedly using their content without permission to train artificial intelligence language models. 

The proposed class action lawsuits filed by Silverman, Richard Kadrey and Christopher Golden in San Francisco federal court Friday allege Facebook parent company Meta and ChatGPT maker OpenAI used copyrighted material to train chatbots. 

Meta and OpenAI, a private company backed by Microsoft, did not immediately respond to requests for comment on Sunday. 

The lawsuits underscore the legal risks developers of chatbots face when using troves of copyrighted material to create apps that deliver realistic responses to user prompts. 

Silverman, Kadrey and Golden allege Meta and OpenAI used their books without authorization to develop their so-called large language models, which their makers pitch as powerful tools for automating tasks by replicating human conversation. 

In their lawsuit against Meta, the plaintiffs allege that leaked information about the company’s artificial intelligence business shows their work was used without permission. 

The lawsuit against OpenAI alleges that summaries of the plaintiffs’ work generated by ChatGPT indicate the bot was trained on their copyrighted content. 

“The summaries get some details wrong” but still show that ChatGPT “retains knowledge of particular works in the training dataset,” the lawsuit says. 

The lawsuits seek unspecified money damages on behalf of a nationwide class of copyright owners whose works were allegedly infringed. 

© Thomson Reuters 2023


For the latest tech news and reviews, follow Gadgets 360 on Twitter, Facebook, and Google News. For the latest videos on gadgets and tech, subscribe to our YouTube channel.



Microsoft-Backed AI4Bharat Said to Raise $12 Million Funding From Peak XV, Lightspeed

Researchers at AI4Bharat, a start-up backed by Microsoft, are raising $12 million (nearly Rs. 100 crore) from venture capital firms Peak XV and Lightspeed Venture, according to three people familiar with the matter.

The larger-than-usual seed funding round underscores the growing interest in generative AI, after OpenAI‘s ChatGPT dazzled users with its ability to engage in human-like conversations. Most seed rounds usually range from $1 million (nearly Rs. 8.3 crore) to $2 million (nearly Rs. 16.5 crore).

AI4Bharat, which is also backed by the Indian government, has been developing AI models for speech recognition and translation. It unveiled in May a mobile assistant that aims to make information on government schemes accessible in multiple languages.

AI4Bharat, Peak XV and Lightspeed did not immediately respond to Reuters’ requests for comment.

Incubated at the Indian Institute of Technology Madras and supported by a grant from Infosys co-founder Nandan Nilekani, AI4Bharat is also working with the National Payments Corporation of India (NPCI) to develop systems for voice-based payments on feature phones.

The investment is among the first from Peak XV Partners after rebranding from Sequoia Capital India and SEA following a split with its US-based parent fund last month.

Peak XV’s other AI investments include voice assistant firm AI Rudder, computer vision firm Mad Street Den and enterprise marketing platform Insider, according to its website.

The buzz around generative AI among both consumers and businesses has helped related start-ups draw funding even as an uncertain economy saps investments for other companies.

Indian AI start-ups have raised $583 million (nearly Rs. 4,800 crore) this year, as of June, according to data from Venture Intelligence. They raised a total of $2.45 billion (nearly Rs. 20,650 crore) last year.

© Thomson Reuters 2023

