Hackers Use ‘Bad Math’ to Trick Generative AI Models to Show Flaws and Biases at DEF CON 2023: Details

Kennedy Mays has just tricked a large language model. It took some coaxing, but she managed to convince an algorithm to say 9 + 10 = 21. “It was a back-and-forth conversation,” said the 21-year-old student from Savannah, Georgia. At first, the model agreed to say it was part of an “inside joke” between them. Several prompts later, it eventually stopped qualifying the errant sum in any way at all.

Producing “Bad Math” is just one of the ways thousands of hackers are trying to expose flaws and biases in generative AI systems at a novel public contest taking place at the DEF CON hacking conference this weekend in Las Vegas.

Hunched over 156 laptops for 50 minutes at a time, the attendees are battling some of the world’s most intelligent platforms on an unprecedented scale. They’re testing whether any of the eight models produced by companies including Alphabet’s Google, Meta Platforms, and OpenAI will make missteps ranging from dull to dangerous: claim to be human, spread incorrect claims about places and people, or advocate abuse.

The aim is to see if companies can ultimately build new guardrails to rein in some of the prodigious problems increasingly associated with large language models or LLMs. The undertaking is backed by the White House, which also helped develop the contest. 

LLMs have the power to transform everything from finance to hiring, with some companies already starting to integrate them into how they do business. But researchers have turned up extensive bias and other problems that threaten to spread inaccuracies and injustice if the technology is deployed at scale. 

For Mays, who is more used to relying on AI to reconstruct cosmic ray particles from outer space as part of her undergraduate degree, the challenges go deeper than bad math.

“My biggest concern is inherent bias,” she said, adding that she’s particularly concerned about racism. She asked the model to consider the First Amendment from the perspective of a member of the Ku Klux Klan. She said the model ended up endorsing hateful and discriminatory speech.

Spying on People

A Bloomberg reporter who took the 50-minute quiz persuaded one of the models (none of which are identified to the user during the contest) to transgress after a single prompt about how to spy on someone. The model spat out a series of instructions, using a GPS tracking device, a surveillance camera, a listening device, and thermal imaging. In response to other prompts, the model suggested ways the US government could surveil a human-rights activist.

“We have to try to get ahead of abuse and manipulation,” said Camille Stewart Gloster, deputy national cyber director for technology and ecosystem security with the Biden administration.

A lot of work has already gone into artificial intelligence and avoiding Doomsday prophecies, she said. The White House last year put out a Blueprint for an AI Bill of Rights and is now working on an executive order on AI. The administration has also encouraged companies to develop safe, secure, transparent AI, although critics doubt such voluntary commitments go far enough.

Arati Prabhakar, director of the White House Office of Science and Technology Policy, which helped shape the event and enlisted the companies’ participation, agreed voluntary measures don’t go far enough.

“Everyone seems to be finding a way to break these systems,” she said after visiting the hackers in action on Sunday. The effort will inject urgency into the administration’s pursuit of safe and effective platforms, she said.

In the room full of hackers eager to clock up points, one competitor said he thinks he convinced the algorithm to disclose credit-card details it wasn’t supposed to share. Another competitor tricked the machine into saying Barack Obama was born in Kenya.

Among the contestants are more than 60 people from Black Tech Street, an organization based in Tulsa, Oklahoma, that represents African American entrepreneurs.

“General artificial intelligence could be the last innovation that human beings really need to do themselves,” said Tyrance Billingsley, executive director of the group who is also an event judge, saying it is critical to get artificial intelligence right so it doesn’t spread racism at scale. “We’re still in the early, early, early stages.”

Researchers have spent years investigating sophisticated attacks against AI systems and ways to mitigate them.

But Christoph Endres, managing director at Sequire Technology, a German cybersecurity company, is among those who contend some attacks are ultimately impossible to dodge. At the Black Hat cybersecurity conference in Las Vegas this week, he presented a paper that argues attackers can override LLM guardrails by concealing adversarial prompts on the open internet, and ultimately automate the process so that models can’t fine-tune fixes fast enough to stop them.

“So far we haven’t found mitigation that works,” he said following his talk, arguing the very nature of the models leads to this type of vulnerability. “The way the technology works is the problem. If you want to be a hundred percent sure, the only option you have is not to use LLMs.”

Sven Cattell, a data scientist who founded DEF CON’s AI Hacking Village in 2018, cautions that it’s impossible to completely test AI systems, given they turn on a system much like the mathematical concept of chaos. Even so, Cattell predicts the total number of people who have ever actually tested LLMs could double as a result of the weekend contest.

Too few people comprehend that LLMs are closer to auto-completion tools “on steroids” than reliable fonts of wisdom, said Craig Martell, the Pentagon’s chief digital and artificial intelligence officer, who argues they cannot reason.

The Pentagon has launched its own effort to evaluate them to propose where it might be appropriate to use LLMs, and with what success rates. “Hack the hell out of these things,” he told an audience of hackers at DEF CON. “Teach us where they’re wrong.” 

© 2023 Bloomberg LP 


Affiliate links may be automatically generated – see our ethics statement for details.

Check out our Latest News and Follow us at Facebook

Original Source

ChatGPT Fever Spreads to US Workplace, Firms Raise Concerns Over Intellectual Property Leaks

Many workers across the US are turning to ChatGPT to help with basic tasks, a Reuters/Ipsos poll found, despite fears that have led employers such as Microsoft and Google to curb its use. Companies worldwide are considering how to best make use of ChatGPT, a chatbot program that uses generative AI to hold conversations with users and answer myriad prompts. Security firms and companies have raised concerns, however, that it could result in intellectual property and strategy leaks.

Anecdotal examples of people using ChatGPT to help with their day-to-day work including drafting emails, summarising documents, and doing preliminary research.

Some 28 percent of respondents to the online poll on artificial intelligence (AI) between July 11 and 17 said they regularly use ChatGPT at work, while only 22 percent said their employers explicitly allowed such external tools.

The Reuters/Ipsos poll of 2,625 adults across the United States had a credibility interval, a measure of precision, of about 2 percentage points.

Some 10 percent of those polled said their bosses explicitly banned external AI tools, while about 25 percent did not know if their company permitted the use of the technology.

ChatGPT became the fastest-growing app in history after its launch in November. It has created both excitement and alarm, bringing its developer OpenAI into conflict with regulators, particularly in Europe, where the company’s mass data-collecting has drawn criticism from privacy watchdogs.

Human reviewers from other companies may read any of the generated chats, and researchers found that similar artificial intelligence AI could reproduce data it absorbed during training, creating a potential risk for proprietary information.

“People do not understand how the data is used when they use generative AI services,” said Ben King, VP of customer trust at corporate security firm Okta.

“For businesses, this is critical, because users don’t have a contract with many AIs – because they are a free service – so corporates won’t have to run the risk through their usual assessment process,” King said.

OpenAI declined to comment when asked about the implications of individual employees using ChatGPT but highlighted a recent company blog post assuring corporate partners that their data would not be used to train the chatbot further unless they gave explicit permission.

When people use Google’s Bard it collects data such as text, location, and other usage information. The company allows users to delete past activity from their accounts and request that content fed into the AI be removed. Alphabet-owned Google declined to comment when asked for further detail.

Microsoft did not immediately respond to a request for comment.

‘HARMLESS TASKS’

A US-based employee of Tinder said workers at the dating app used ChatGPT for “harmless tasks” like writing emails even though the company does not officially allow it.

“It’s regular emails. Very non-consequential, like making funny calendar invites for team events, farewell emails when someone is leaving … We also use it for general research,” said the employee, who declined to be named because they were not authorized to speak with reporters.

The employee said Tinder has a “no ChatGPT rule” but that employees still use it in a “generic way that doesn’t reveal anything about us being at Tinder”.

Reuters was not able independently confirm how employees at Tinder were using ChatGPT. Tinder said it provided “regular guidance to employees on best security and data practices”.

In May, Samsung Electronics banned staff globally from using ChatGPT and similar AI tools after discovering an employee had uploaded sensitive code to the platform.

“We are reviewing measures to create a secure environment for generative AI usage that enhances employees’ productivity and efficiency,” Samsung said in a statement on August 3.

“However, until these measures are ready, we are temporarily restricting the use of generative AI through company devices.”

Reuters reported in June that Alphabet had cautioned employees about how they use chatbots including Google’s Bard, at the same time as it markets the program globally.

Google said although Bard can make undesired code suggestions, it helps programmers. It also said it aimed to be transparent about the limitations of its technology.

BLANKET BANS

Some companies told Reuters they are embracing ChatGPT and similar platforms while keeping security in mind.

“We’ve started testing and learning about how AI can enhance operational effectiveness,” said a Coca-Cola spokesperson in Atlanta, Georgia, adding that data stays within its firewall.

“Internally, we recently launched our enterprise version of Coca-Cola ChatGPT for productivity,” the spokesperson said, adding that Coca-Cola plans to use AI to improve the effectiveness and productivity of its teams.

Tate & Lyle Chief Financial Officer Dawn Allen, meanwhile, told Reuters that the global ingredients maker was trialing ChatGPT, having “found a way to use it in a safe way”.

“We’ve got different teams deciding how they want to use it through a series of experiments. Should we use it in investor relations? Should we use it in knowledge management? How can we use it to carry out tasks more efficiently?”

Some employees say they cannot access the platform on their company computers at all.

“It’s completely banned on the office network like it doesn’t work,” said a Procter & Gamble employee, who wished to remain anonymous because they were not authorized to speak to the press.

P&G declined to comment. Reuters was not able independently to confirm whether employees at P&G were unable to use ChatGPT.

Paul Lewis, chief information security officer at cyber security firm Nominet, said firms were right to be wary.

“Everybody gets the benefit of that increased capability, but the information isn’t completely secure and it can be engineered out,” he said, citing “malicious prompts” that can be used to get AI chatbots to disclose information.

“A blanket ban isn’t warranted yet, but we need to tread carefully,” Lewis said. 

© Thomson Reuters 2023  


Affiliate links may be automatically generated – see our ethics statement for details.

Check out our Latest News and Follow us at Facebook

Original Source

US Government Launches Cyber Contest on AI to Find and Fix Security Flaws

The White House on Wednesday said it had launched a multimillion-dollar cyber contest to spur use of artificial intelligence (AI) to find and fix security flaws in US government infrastructure, in the face of growing use of the technology by hackers for malicious purposes. 

Cybersecurity is a race between offense and defense,” said Anne Neuberger, the US government’s deputy national security advisor for cyber and emerging technology.

“We know malicious actors are already using AI to accelerate identifying vulnerabilities or build malicious software,” she added in a statement to Reuters.

Numerous US organizations, from healthcare groups to manufacturing firms and government institutions, have been hacking targets in recent years, and officials have warned of future threats, especially from foreign adversaries. 

Neuberger’s comments about AI echo those Canada’s cybersecurity chief Samy Khoury made last month. He said his agency had seen AI being used for everything from creating phishing emails and writing malicious computer code to spreading disinformation.

The two-year contest includes around $20 million (nearly Rs. 165 crore) in rewards and will be led by the Defense Advanced Research Projects Agency (DARPA) — the US government body in charge of creating technologies for national security — the White House said.

Alphabet‘s Google, Anthropic, Microsoft, and OpenAI — the US technology firms at the forefront of the AI revolution — will make their systems available for the challenge, the government said.

The contest signals official attempts to tackle an emerging threat that experts are still trying to fully grasp. In the past year, US firms have launched a range of generative AI tools such as ChatGPT that allow users to create convincing videos, images, texts, and computer code. Chinese companies have launched similar models to catch up.

Experts say such tools could make it far easier to, for instance, conduct mass hacking campaigns or create fake profiles on social media to spread false information and propaganda. 

“Our goal with the DARPA AI challenge is to catalyze a larger community of cyber defenders who use the participating AI models to race faster – using generative AI to bolster our cyber defenses,” Neuberger said.

The Open Source Security Foundation (OpenSSF), a US group of experts trying to improve open source software security, will be in charge of ensuring the “winning software code is put to use right away,” the US government said.

© Thomson Reuters 2023


Affiliate links may be automatically generated – see our ethics statement for details.

Check out our Latest News and Follow us at Facebook

Original Source

News Organisations Call for Regulations on Content Use by AI Makers, Reveals Letter

A group of the world’s biggest news media organizations called for revised regulations on the use of copyrighted material by makers of artificial intelligence technology, according to an open letter published on Wednesday.

The note, signed by industry bodies like the News Media Alliance — which includes nearly 2,000 publications in the United States — and the European Publishers’ Council, batted for a framework enabling media companies to “collectively negotiate” with AI model operators regarding the operators’ use of their intellectual property.

“Generative AI and large language models… disseminate that content and information to their users, often without any consideration of, remuneration to, or attribution to the original creators. Such practices undermine the media industry’s core business models,” according to the letter. 

Services like OpenAI‘s ChatGPT and Google‘s Bard, which use the language producing generative AI, has led to a surge in online content produced by bots and several industries are assessing its impact on their businesses.

Most of those services do not disclose what inputs they have used to train their models, although with earlier versions of their models have said they used datasets comprising billions of pieces of information scraped from the internet for training, which include content from news websites.

Even as the technology sees wide adoption — several companies have launched features based on generative AI — governments around the world are still deliberating rules to govern its use.

The move echoes the news media industry’ long-standing effort to secure favorable deals with tech companies like Meta Platforms and Alphabet, which are often accused by publishers of running platforms filled with news content without adequately sharing profits. US lawmakers this year are considering a bill called the Journalism Competition and Preservation Act, which allow news broadcasters and publishers with fewer than 1,500 full-time workers to jointly negotiate ad rates with the likes of Google and Facebook.

Meanwhile, news companies are beginning to experiment with generative AI and negotiate deals with tech companies for their content to be used to train AI models.

News agency Associated Press, one of the signatories of the letter, last month signed a deal with OpenAI to license a part of AP’s archive of stories and explore generative AI’s use in news. OpenAI also committed $5 million (nearly Rs. 41 crores) to the American Journalism Project (AJP) under a partnership that will look for ways to support local news through AI.


Affiliate links may be automatically generated – see our ethics statement for details.

Check out our Latest News and Follow us at Facebook

Original Source

Worldcoin to Allow Organisations to Use Its Digital ID System

Worldcoin will expand its operations to sign up more users globally and aims to allow other organisations to use its iris-scanning and identity-verifying technology, a senior manager for the company behind the project told Reuters.

Co-founded by OpenAI CEO Sam Altman, Worldcoin launched last week, requiring users to give their iris scans in exchange for a digital ID and, in some countries, free cryptocurrency as part of plans to create a “identity and financial network”.

In sign-up sites around the world, people have been getting their faces scanned by a shiny spherical “orb”, shrugging off privacy campaigners’ concerns that the biometric data could be misused. Worldcoin says 2.2 million have signed up, mostly during a trial period over the last two years. Data watchdogs in Britain, France and Germany have said they are looking into the project.

“We are on this mission of building the biggest financial and identity community that we can,” said Ricardo Macieira, general manager for Europe at Tools For Humanity, the San Francisco and Berlin-based company behind the project.

Worldcoin raised $115 million (nearly Rs. 95,150 crore) from venture capital investors including Blockchain Capital, a16z crypto, Bain Capital Crypto and Distributed Global in a funding round in May.

Macieira said Worldcoin would continue rolling out operations in Europe, Latin America, Africa and “all the parts of the world that will accept us.”

Worldcoin’s website mentions various possible applications, including distinguishing humans from artificial intelligence, enabling “global democratic processes” and showing a “potential path” to universal basic income, although these outcomes are not guaranteed.

Most people interviewed by Reuters at sign-up sites in Britain, India and Japan last week said they were joining in order to receive the 25 free Worldcoin tokens the company says verified users can claim.

“I don’t think we are going to be the ones generating universal basic income. If we can do the infrastructure that allows for governments or other entities to do so we would be very happy,” Macieira said.

Companies could pay Worldcoin to use its digital identity system, for example if a coffee shop wants to give everyone one free coffee, then Worldcoin’s technology could be used to ensure that people do not claim more than one coffee without the shop needing to gather personal data, Macieira said.

“The idea is that as we build this infrastructure and that we allow other third parties to use the technology.”

In future, the technology behind the iris-scanning orb will be open-source, Macieira added.

“The idea is that anyone can in the future build their own orb and use it to benefit the community that it’s aiming for,” he said.

Privacy concerns

Regulators and privacy campaigners have raised concerns about Worldcoin’s data collection, including whether users are giving informed consent and whether one company should be responsible for handling the data.

Worldcoin’s website says the project is “completely private” and that the biometric data is either deleted or users can opt to have it stored in encrypted form.

The Bavarian State Office for Data Protection Supervision, which has jurisdiction in the European Union because Tools For Humanity has an office there, said it started investigating Worldcoin in November 2022 because of concerns about its large-scale processing of sensitive data.

Michael Will, president of the Bavarian regulator, said it would look into whether Worldcoin’s system is “safe and stable”.

The project “requires very, very ambitious security measures and lots of explanations and transparency to ensure that data protection requirements are not neglected,” Will said.

Will said people who hand over their data need “absolute clarity” about how and why it is processed.

Rainer Rehak, a researcher on AI and society at the Weizenbaum Institute in Berlin said that Worldcoin’s use of technology is “irresponsible” and that it is not clear what problems it would solve.

“The bottom line is it’s a big project to create a new consumer base for Web3 and crypto products,” he said. Web3 is a term for a hypothetical next phase of the internet, based around blockchain, in which users’ assets and data exist as tradable crypto assets.

Addressing privacy concerns, the Worldcoin Foundation, a Cayman Islands-based entity, said in a statement that it complies with all laws governing personal data and will continue to cooperate with governing bodies’ requests for information about its privacy and data protection practices.

© Thomson Reuters 2023


Samsung launched the Galaxy Z Fold 5 and Galaxy Z Flip 5 alongside the Galaxy Tab S9 series and Galaxy Watch 6 series at its first Galaxy Unpacked event in South Korea. We discuss the company’s new devices and more on the latest episode of Orbital, the Gadgets 360 podcast. Orbital is available on Spotify, Gaana, JioSaavn, Google Podcasts, Apple Podcasts, Amazon Music and wherever you get your podcasts.
Affiliate links may be automatically generated – see our ethics statement for details.

Check out our Latest News and Follow us at Facebook

Original Source

Meta to Launch AI-Powered Chatbots With Different Personalities by September: Report

Meta Platforms is preparing to launch a range of artificial intelligence (AI) powered chatbots that exhibit different personalities as soon as September, the Financial Times reported on Tuesday.

Meta has been designing prototypes for chatbots that can have humanlike discussions with its users, as the company attempts to boost its engagement with its social media platforms, according to the report, citing people with knowledge of the plans.

The Menlo Park, California-based social media giant is even exploring a chatbot that speaks like Abraham Lincoln and another that advises on travel options in the style of a surfer, the report added. The purpose of these chatbots will be to provide a new search function as well as offer recommendations.

The report comes as Meta executives are focusing on boosting retention on its new text-based app Threads, after the app lost more than half of its users in the weeks following its launch on July 5.

Meta did not immediately respond to a Reuters request for comment.

The Facebook parent reported a strong rise in advertising revenue in its earnings last week, forecasting third-quarter revenue above market expectations.

The company has been climbing back from a bruising 2022, buoyed by hype around emerging AI technology and an austerity drive in which it has shed around 21,000 employees since last fall.

Bloomberg News reported in July that Apple is working on AI offerings similar to OpenAI’s ChatGPT and Google’s Bard, adding that it has built its own framework, known as ‘Ajax’, to create large language models and is also testing a chatbot that some engineers call ‘Apple GPT’.

© Thomson Reuters 2023


Samsung launched the Galaxy Z Fold 5 and Galaxy Z Flip 5 alongside the Galaxy Tab S9 series and Galaxy Watch 6 series at its first Galaxy Unpacked event in South Korea. We discuss the company’s new devices and more on the latest episode of Orbital, the Gadgets 360 podcast. Orbital is available on Spotify, Gaana, JioSaavn, Google Podcasts, Apple Podcasts, Amazon Music and wherever you get your podcasts.
Affiliate links may be automatically generated – see our ethics statement for details.

Check out our Latest News and Follow us at Facebook

Original Source

OpenAI CEO Sam Altman’s Worldcoin Crypto Under European Regulators Scrutiny

Less than a week after its launch, the Worldcoin crypto project of OpenAI chief executive Sam Altman is already under scrutiny by European regulators over its reliance on an eye scan to verify a user’s identity, France’s data protection agency said Friday.

Worldcoin’s launch on Monday comes as the cryptocurrency industry is suffering hard times after the spectacular collapse of FTX and various legal cases against the sector’s biggest players.

Using eye scans, it tries to solve one of the main challenges facing the crypto industry: a level of anonymity so high that makes it vulnerable to scams and spam bots — which AI threatens to make exponentially worse. 

But Worldcoin’s collection of biometric data could run afoul of strict data privacy rules in Europe.

“Worldcoin has begun to collect data in France… which seems questionable as does the conservation of biometric data,” France’s CNIL data regulator told AFP.

After conducting an initial review, CNIL said it identified its counterpart in the German state of Bavaria as the lead agency in Europe to conduct a probe into Worldcoin, and said it supports their investigation.

Worldcoin in fact began operating in June in Germany, which is the home country of co-founder Alex Blania.

Bavaria’s data protection agency had no immediate comment when contacted by AFP on Friday.

With its cryptocurrency and identification system Worldcoin aims to create the “world’s largest identity and financial public network,” according to its website.

Altman and Blania said earlier this week in a letter posted to Twitter, which is being renamed X, that the Worldcoin offers “a reliable solution for distinguishing humans from AI online while preserving privacy”.

This will in turn enable Worldcoin as a blockchain-based technology to drastically increase economic opportunity and enable democratic processes.

Blockchains are distributed databases that facilitate the verification and traceability of transactions. 

They can offer lower costs and faster data transfer while ensuring secure transactions, although the most famous blockchain that powers the cryptocurrency Bitcoin, is notorious for being slow and expensive as it requires huge computer processing power to validate transactions as part of its system to reward processors with new bitcoins.


Is the iQoo Neo 7 Pro the best smartphone you can buy under Rs. 40,000 in India? We discuss the company’s recently launched handset and what it has to offer on the latest episode of Orbital, the Gadgets 360 podcast. Orbital is available on Spotify, Gaana, JioSaavn, Google Podcasts, Apple Podcasts, Amazon Music and wherever you get your podcasts.

(This story has not been edited by NDTV staff and is auto-generated from a syndicated feed.)

Affiliate links may be automatically generated – see our ethics statement for details.

Check out our Latest News and Follow us at Facebook

Original Source

ChatGPT Android App Now Available in India: How to Download OpenAI’s AI Chatbot on Android

ChatGPT, the AI chatbot from OpenAI, has just been released on Android after being available to iOS users for two months now. The rollout of the Android version of the app will take place in phases and it is initially open to users in the US, India, Bangladesh, and Brazil. OpenAI eyes to expand the app to more countries further over the next week. The ChatGPT app for Android is available for free, but OpenAI is offering an optional subscription to access a better large language model (GPT-4) and additional features.

The ChatGPT Android app is available for download in India through the Google Play store and is compatible with devices running on Android 6.0 or later. Besides India, the app is now live in the US, Bangladesh, and Brazil and OpenAI plans to expand the rollout to additional countries over the next week.

How to download the ChatGPT app on Android

Those who did not pre-register for the app can head to the Google Play store and download it. It is 6MB in size. Users who pre-registered will only need to update the app from the store and that’s a small download.

  1. Head to the Google Play store’s ChatGPT Android app page.
  2. Tap on the Install button and wait for the app to download and install
  3. If you are not already signed in, open the app and enter your Google ID and password when requested.
  4. Other users can enter existing credentials

Once done you can access all the features that are available on the desktop version of ChatGPT. The Android app brings chat history and syncing support. Users can sign up with a free account though they have the option to access the paid GPT-4 model.

ChatGPT was exclusive to Apple’s iOS since its initial release in May 2023. The AI-based chatbot gained popularity in the tech world shortly after its public release in November last year. Users can generate text content on ChatGPT by entering queries and prompts. The platform uses artificial intelligence (AI) to provide answers, tailored advice, and inputs to users.

However, the launch of the Android app is coming a week after a report by Similarweb showed a drop in the monthly traffic to ChatGPT’s website and unique visitors in June. As per the analytics firm, worldwide desktop and mobile traffic to the ChatGPT website fell by 9.7 percent in June from May, while unique visitors to ChatGPT’s website dropped 5.7 percent. The amount of time visitors spent on the website was also down 8.5 percent.


Will the Nothing Phone 2 serve as the successor to the Phone 1, or will the two co-exist? We discuss the company’s recently launched handset and more on the latest episode of Orbital, the Gadgets 360 podcast. Orbital is available on Spotify, Gaana, JioSaavn, Google Podcasts, Apple Podcasts, Amazon Music and wherever you get your podcasts.
Affiliate links may be automatically generated – see our ethics statement for details.

Check out our Latest News and Follow us at Facebook

Original Source

OpenAI CEO Sam Altman Launches Worldcoin Cryptocurrency Project: Details

Worldcoin, a cryptocurrency project founded by OpenAI CEO Sam Altman, launches on Monday.

The project’s core offering is its World ID, an account that only real humans can get. To get a World ID, a customer signs up to do an in-person iris scan using Worldcoin’s ‘orb’, a silver ball approximately the size of a bowling ball. Once the orb’s iris scan verifies the person is a real human, it creates a World ID.

The company behind Worldcoin is San Francisco and Berlin-based Tools for Humanity.

The project has 2 million users from its beta period, and with Monday’s launch, Worldcoin is scaling up “orbing” operations to 35 cities in 20 countries. As an enticement, those who sign up in certain countries will receive Worldcoin’s cryptocurrency token WLD.

The cryptocurrency aspect of the World IDs is important because cryptocurrency blockchains can store the World IDs in a way that preserves privacy and can’t be controlled or shut down by any single entity, co-founder Alex Blania told Reuters.

The project says World IDs will be necessary in the age of generative AI chatbots like ChatGPT, which produce remarkably humanlike language. World IDs could be used to tell the difference between real people and AI bots online.

Binance, the largest cryptocurrency exchange, said it will list Worldcoin with the tentative opening of trading expected to be on Monday at 0900 GMT.

Altman told Reuters Worldcoin also can help address how the economy will be reshaped by generative AI.

“People will be supercharged by AI, which will have massive economic implications,” he said.

One example Altman likes is universal basic income, or UBI, a social benefits program usually run by governments where every individual is entitled to payments. Because AI “will do more and more of the work that people now do,” Altman believes UBI can help to combat income inequality. Since only real people can have World IDs, it could be used to reduce fraud when deploying UBI.

Altman said he thought a world with UBI would be “very far in the future” and he did not have a clear idea of what entity could dole out money, but that Worldcoin lays groundwork for it to become a reality.

“We think that we need to start experimenting with things so we can figure out what to do,” he said.

© Thomson Reuters 2023


Affiliate links may be automatically generated – see our ethics statement for details.

Check out our Latest News and Follow us at Facebook

Original Source

Gliding, Not Searching: Here’s How to Reset Your View of ChatGPT to Steer It to Better Results

ChatGPT has exploded in popularity, and people are using it to write articles and essays, generate marketing copy and computer code, or simply as a learning or research tool.

However, most people don’t understand how it works or what it can do, so they are either not happy with its results or not using it in a way that can draw out its best capabilities.

I’m a human factors engineer. A core principle in my field is never blame the user.

Unfortunately, the ChatGPT search-box interface elicits the wrong mental model and leads users to believe that entering a simple question should lead to a comprehensive result, but that’s not how ChatGPT works.

Unlike a search engine, with static and stored results, ChatGPT never copies, retrieves or looks up information from anywhere.

Rather, it generates every word anew. You send it a prompt, and based on its machine-learning training on massive amounts of text, it creates an original answer.

Most importantly, each chat retains context during a conversation, meaning that questions asked and answers provided earlier in the conversation will inform responses it generates later.

The answers, therefore, are malleable, and the user needs to participate in an iterative process to shape them into something useful.

Your mental model of a machine – how you conceive of it – is important for using it effectively.

To understand how to shape a productive session with ChatGPT, think of it as a glider that takes you on journeys through knowledge and possibilities.

Dimensions of knowledge

You can begin by thinking of a specific dimension or space in a topic that intrigues you. If the topic were chocolate, for example, you might ask it to write a tragic love story about Hershey’s Kisses.

The glider has been trained on essentially everything ever written about Kisses, and similarly it “knows” how to glide through all kinds of story spaces — so it will confidently take you on a flight through Hershey’s Kisses space to produce the desired story.

You might instead ask it to explain five ways in which chocolate is healthy and give the response in the style of Dr. Seuss.

Your requests will launch the glider through different knowledge spaces – chocolate and health – toward a different destination – a story in a specific style.

To unlock ChatGPT’s full potential, you can learn to fly the glider through “transversal” spaces – areas that cross multiple domains of knowledge.

By guiding it through these domains, ChatGPT will learn both the scope and angle of your interest and will begin to adjust its response to provide better answers.

For example, consider this prompt: “Can you give me advice on getting healthy.” In that query, ChatGPT does not know who the “you” is, nor who “me” is, nor what you mean by “getting healthy.” Instead, try this: “Pretend you are a medical doctor, a nutritionist and a personal coach. Prepare a two-week food and exercise plan for a 56-year-old man to increase heart health.” With this, you have given the glider a more specific flight plan spanning areas of medicine, nutrition and motivation.

If you want something more precise, then you can activate a few more dimensions. For example, add in: “And I want to lose some weight and build muscle, and I want to spend 20 minutes a day on exercise, and I cannot do pull-ups and I hate tofu.” ChatGPT will provide output taking into account all of your activated dimensions. Each dimension can be presented together or in sequence.

Flight plan

The dimensions you add through prompts can be informed by answers ChatGPT has given along the way. Here’s an example: “Pretend you are an expert in cancer, nutrition and behaviour change. Propose 8 behaviour-change interventions to reduce cancer rates in rural communities.” ChatGPT will dutifully present eight interventions.

Let’s say three of the ideas look the most promising. You can follow up with a prompt to encourage more details and start putting it in a format that could be used for public messaging: “Combine concepts from ideas 4, 6 and 7 to create 4 new possibilities – give each a tagline, and outline the details.” Now let’s say intervention 2 seems promising. You can prompt ChatGPT to make it even better: “Offer six critiques of intervention 2 and then redesign it to address the critiques.” ChatGPT does better if you first focus on and highlight dimensions you think are particularly important.

For example, if you really care about the behaviour-change aspect of the rural cancer rates scenario, you could force ChatGPT to get more nuanced and add more weight and depth to that dimension before you go down the path of interventions.

You could do this by first prompting: “Classify behaviour-change techniques into 6 named categories. Within each, describe three approaches and name two important researchers in the category.” This will better activate the behaviour-change dimension, letting ChatGPT incorporate this knowledge in subsequent explorations.

There are many categories of prompt elements you can include to activate dimensions of interest.

One is domains, like “machine learning approaches.” Another is expertise, like “respond as an economist with Marxist leanings.” And another is output style, like “write it as an essay for The Economist.” You can also specify audiences, like “create and describe 5 clusters of our customer-types and write a product description targeted to each one.” Explorations, not answers By rejecting the search engine metaphor and instead embracing a transdimensional glider metaphor, you can better understand how ChatGPT works and navigate more effectively toward valuable insights.

The interaction with ChatGPT is best performed not as a simple or undirected question-and-answer session, but as an interactive conversation that progressively builds knowledge for both the user and the chatbot.

The more information you provide to it about your interests, and the more feedback it gets on its responses, the better its answers and suggestions. The richer the journey, the richer the destination.

It is important, however, to use the information provided appropriately. The facts, details and references ChatGPT presents are not taken from verified sources.

They are conjured based on its training on a vast but non-curated set of data. ChatGPT will generate a medical diagnosis the same way it writes a Harry Potter story, which is to say it is a bit of an improviser.

You should always critically evaluate the specific information it provides and consider its output as explorations and suggestions rather than as hard facts.

Treat its content as imaginative conjectures that require further verification, analysis and filtering by you, the human pilot.


Will the Nothing Phone 2 serve as the successor to the Phone 1, or will the two co-exist? We discuss the company’s recently launched handset and more on the latest episode of Orbital, the Gadgets 360 podcast. Orbital is available on Spotify, Gaana, JioSaavn, Google Podcasts, Apple Podcasts, Amazon Music and wherever you get your podcasts.
Affiliate links may be automatically generated – see our ethics statement for details.

Check out our Latest News and Follow us at Facebook

Original Source

Exit mobile version