Prime Minister Narendra Modi Meets Nvidia CEO, Discusses India’s AI Potential

Prime Minister Narendra Modi on Monday met Jensen Huang, CEO of American technology firm Nvidia, and the two talked at length about the “rich potential” India offers in the world of Artificial Intelligence.

In a post on X, Modi said, “Had an excellent meeting with Mr. Jensen Huang, the CEO of @nvidia. We talked at length about the rich potential India offers in the world of AI.

“Mr. Jensen Huang was appreciative of the strides India has made in this sector and was equally upbeat about the talented youth of India,” the prime minister said.

Nvidia Corporation is an American multinational technology company that was founded on April 5, 1993, by Jensen Huang, Chris Malachowsky, and Curtis Priem, with a vision to bring 3D graphics to the gaming and multimedia markets.



(This story has not been edited by NDTV staff and is auto-generated from a syndicated feed.)

Affiliate links may be automatically generated – see our ethics statement for details.




Reliance Chairman Mukesh Ambani Pushes Artificial Intelligence Plans: Jio Promises AI to Everyone, Everywhere in India

Jio Platforms is keen to lead efforts in developing India-specific AI models and AI-powered solutions across domains, delivering the benefit of this new-age technology to Indian citizens, businesses and government, RIL Chairman Mukesh Ambani said on Monday, promising “AI to everyone, everywhere.”

Terming Artificial Intelligence (AI) as the most exciting frontier of growth for Jio, Ambani outlined ambitious plans on this front at the 46th AGM of Reliance Industries.

Ambani pledged the company’s commitment to create up to 2,000 MW of AI-ready computing capacity across both cloud and edge locations, while adopting sustainable practices in pursuit of a greener future.

A global AI revolution is reshaping the world and intelligent applications will redefine and revolutionise industries, economies, and even daily life, sooner than expected, the RIL top honcho said.

To stay globally competitive, India must harness AI for innovation, growth, and national prosperity, he asserted.

“Here is my promise to our countrymen. Seven years ago, Jio promised broadband connectivity to everyone, everywhere. We have delivered. Today Jio promises AI to everyone, everywhere. And we shall deliver,” he vowed.

Within the RIL group, the talent pool and capabilities are being augmented to swiftly assimilate the latest global innovations in AI, especially the recent advances in generative AI.

“Looking ahead, Jio Platforms wants to lead the effort in developing India-specific AI models and AI-powered solutions across domains, thereby delivering the benefit of AI to Indian citizens, businesses and government alike,” he said.

India has the scale, the data, and the talent, Ambani noted.

“But we also need digital infrastructure in India that can handle AI’s immense computational demands. As this sector expands, we stand committed to creating up to 2,000 MW of AI-ready computing capacity, across both cloud and edge locations…Over the next five years, we plan to shift most of our energy footprint in connectivity and digital services to green energy, which is not just eco-friendly but also lower cost,” he said.



Windows 11 Apps Like Photos, Paint, and Snipping Tool Could Soon Offer AI-Backed Features: Report

Windows 11 could soon gain support for artificial intelligence (AI) features as Microsoft is reportedly working on adding support for automation and AI to its popular desktop operating system and related products. The Redmond-based software company’s Windows 11 operating system comes with Photos and Paint apps for image viewing and basic manipulation, respectively. According to a report, these apps could soon gain support for features that include optical character recognition (OCR) and creating images on the fly using generative AI.

According to a Windows Central report citing unnamed sources, Microsoft is working on adding AI features to four applications — Photos, Camera, Paint, and the Snipping Tool, which is used to capture screenshots. These applications are available on Windows 11 out-of-the-box, and the new functionality could be added via app updates on the Microsoft Store or via the company’s regular feature updates to the operating system.

The firm is considering the possibility of adding support for generative AI to the Microsoft Paint app, which would allow the basic image manipulation tool to generate images from prompts on the fly and edit them in the app. Microsoft previously introduced support for generating images with user-provided prompts on its revamped AI-powered Bing app, using OpenAI’s DALL-E text-to-image model.

Meanwhile, the Photos app, which is the default app used to open images on Windows 11, as well as the Snipping Tool, could both get a feature that is currently available on smartphones — OCR support. This is a feature that can come in handy for millions of users and will eliminate the need to use online services that offer the same functionality. The report also contains an image of an internal build of the Camera app with OCR support to detect text from a photo of a page.

There’s no word on whether or when Microsoft will roll out these features to Windows users, and some of them could require dedicated hardware for neural computation to work reliably. Windows 11 is expected to get a major software upgrade next year, which is reportedly when many AI features could make their way to the operating system. However, some of these features are still “experimental”, which means it could be a while before they roll out to users, according to the report.



ChatGPT and Other Language AIs Are Nothing Without Humans — a Sociologist Explains How Countless Hidden People Make the Magic

The media frenzy surrounding ChatGPT and other large language model artificial intelligence systems spans a range of themes, from the prosaic – large language models could replace conventional web search – to the concerning – AI will eliminate many jobs – and the overwrought – AI poses an extinction-level threat to humanity. 

All of these themes have a common denominator: large language models herald artificial intelligence that will supersede humanity.

But large language models, for all their complexity, are actually really dumb. And despite the name “artificial intelligence,” they’re completely dependent on human knowledge and labor. They can’t reliably generate new knowledge, of course, but there’s more to it than that.

ChatGPT can’t learn, improve or even stay up to date without humans giving it new content and telling it how to interpret that content, not to mention programming the model and building, maintaining and powering its hardware. To understand why, you first have to understand how ChatGPT and similar models work, and the role humans play in making them work.

How ChatGPT works

Large language models like ChatGPT work, broadly, by predicting what characters, words and sentences should follow one another in sequence based on training data sets. In the case of ChatGPT, the training data set contains immense quantities of public text scraped from the internet.

Imagine I trained a language model on the following set of sentences: Bears are large, furry animals. Bears have claws. Bears are secretly robots. Bears have noses. Bears are secretly robots. Bears sometimes eat fish. Bears are secretly robots.

The model would be more inclined to tell me that bears are secretly robots than anything else, because that sequence of words appears most frequently in its training data set. This is obviously a problem for models trained on fallible and inconsistent data sets – which is all of them, even academic literature.
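That frequency-driven behaviour can be sketched in a few lines of code. This is a deliberately toy illustration built on the bear sentences above — nothing like a real language model, which predicts over tokens with a neural network rather than counting whole continuations:

```python
from collections import Counter

# Toy corpus from the example above
corpus = [
    "Bears are large, furry animals.",
    "Bears have claws.",
    "Bears are secretly robots.",
    "Bears have noses.",
    "Bears are secretly robots.",
    "Bears sometimes eat fish.",
    "Bears are secretly robots.",
]

# Count how often each continuation follows the word "Bears"
continuations = Counter(
    sentence.split(" ", 1)[1] for sentence in corpus
)

# The most frequent continuation wins, just as in the example
print(continuations.most_common(1))
# [('are secretly robots.', 3)]
```

The “robot bears” answer wins purely because it appears most often — which is exactly why inconsistencies in the training data become inconsistencies in the model’s output.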

People write lots of different things about quantum physics, Joe Biden, healthy eating or the Jan. 6 insurrection, some more valid than others. How is the model supposed to know what to say about something, when people say lots of different things?

The need for feedback

This is where feedback comes in. If you use ChatGPT, you’ll notice that you have the option to rate responses as good or bad. If you rate them as bad, you’ll be asked to provide an example of what a good answer would contain. ChatGPT and other large language models learn what answers, what predicted sequences of text, are good and bad through feedback from users, the development team and contractors hired to label the output.

ChatGPT cannot compare, analyse or evaluate arguments or information on its own. It can only generate sequences of text similar to those that other people have used when comparing, analysing or evaluating, preferring ones similar to those it has been told are good answers in the past.

Thus, when the model gives you a good answer, it’s drawing on a large amount of human labour that’s already gone into telling it what is and isn’t a good answer. There are many, many human workers hidden behind the screen, and they will always be needed if the model is to continue improving or to expand its content coverage.

A recent investigation published by journalists in Time magazine revealed that hundreds of Kenyan workers spent thousands of hours reading and labeling racist, sexist and disturbing writing, including graphic descriptions of sexual violence, from the darkest depths of the internet to teach ChatGPT not to copy such content.

They were paid no more than $2 an hour, and many understandably reported experiencing psychological distress due to this work.

What ChatGPT can’t do

The importance of feedback can be seen directly in ChatGPT’s tendency to “hallucinate”; that is, confidently provide inaccurate answers. ChatGPT can’t give good answers on a topic without training, even if good information about that topic is widely available on the internet.

You can try this out yourself by asking ChatGPT about more and less obscure things. I’ve found it particularly effective to ask ChatGPT to summarise the plots of different fictional works because, it seems, the model has been more rigorously trained on nonfiction than fiction.

In my own testing, ChatGPT summarised the plot of J.R.R. Tolkien’s The Lord of the Rings, a very famous novel, with only a few mistakes. But its summaries of Gilbert and Sullivan’s The Pirates of Penzance and of Ursula K. Le Guin’s The Left Hand of Darkness – both slightly more niche but far from obscure – come close to playing Mad Libs with the character and place names. It doesn’t matter how good these works’ respective Wikipedia pages are. The model needs feedback, not just content.

Because large language models don’t actually understand or evaluate information, they depend on humans to do it for them. They are parasitic on human knowledge and labor. When new sources are added into their training data sets, they need new training on whether and how to build sentences based on those sources.

They can’t evaluate whether news reports are accurate or not. They can’t assess arguments or weigh trade-offs. They can’t even read an encyclopedia page and only make statements consistent with it, or accurately summarize the plot of a movie. They rely on human beings to do all these things for them.

Then they paraphrase and remix what humans have said, and rely on yet more human beings to tell them whether they’ve paraphrased and remixed well. If the common wisdom on some topic changes – for example, whether salt is bad for your heart or whether early breast cancer screenings are useful – they will need to be extensively retrained to incorporate the new consensus.

Many people behind the curtain

In short, far from being the harbingers of totally independent AI, large language models illustrate the total dependence of many AI systems, not only on their designers and maintainers but on their users. So if ChatGPT gives you a good or useful answer about something, remember to thank the thousands or millions of hidden people who wrote the words it crunched and who taught it what were good and bad answers.

Far from being an autonomous superintelligence, ChatGPT is, like all technologies, nothing without us.



Disney Creates Task Force to Explore AI Applications Across Verticals, Cut Costs

Walt Disney has created a task force to study artificial intelligence and how it can be applied across the entertainment conglomerate, even as Hollywood writers and actors battle to limit the industry’s exploitation of the technology.

Launched earlier this year, before the Hollywood writers’ strike, the group is looking to develop AI applications in-house as well as form partnerships with startups, three sources told Reuters.

As evidence of its interest, Disney has 11 current job openings seeking candidates with expertise in artificial intelligence or machine learning.

The positions touch virtually every corner of the company – from Walt Disney Studios to the company’s theme parks and engineering group, Walt Disney Imagineering, to Disney-branded television and the advertising team, which is looking to build a “next-generation” AI-powered ad system, according to the job ad descriptions.

A Disney spokesperson declined to comment.

One of the sources, an internal advocate who spoke on condition of anonymity because of the sensitivity of the subject, said legacy media companies like Disney must either figure out AI or risk obsolescence.

This supporter sees AI as one tool to help control the soaring costs of movie and television production, which can swell to $300 million (roughly Rs. 2,484 crore) for a major film release like “Indiana Jones and the Dial of Destiny” or “The Little Mermaid.” Such budgets require equally massive box office returns simply to break even. Cost savings would be realized over time, the person said.

For its parks business, AI could enhance customer support or create novel interactions, said the second source as well as a former Disney Imagineer, who declined to be identified because he was not authorized to speak publicly.

The former Imagineer pointed to Project Kiwi, which used machine-learning techniques to create Baby Groot, a small, free-roaming robot that mimics the “Guardians of the Galaxy” character’s movements and personality.

Machine learning, the branch of AI that gives computers the ability to learn without being explicitly programmed, informs its vision systems, so it is able to recognize and navigate objects in its environment. Someday, Baby Groot will interact with guests, the former Imagineer said.

AI has become a powder keg in Hollywood, where writers and actors view it as an existential threat to jobs. It is a central issue in contract negotiations with the Screen Actors Guild and the Writers Guild of America, both of which are on strike.

Disney has been careful about how it discusses AI in public. The visual effects supervisors who worked on the latest “Indiana Jones” movie emphasized the painstaking labors of more than 100 artists who spent three years seeking to “de-age” Harrison Ford so that the octogenarian actor could appear as his younger self in the early minutes of the film.

‘STEAMBOAT WILLIE’

Disney has invested in technological innovation since its earliest days. In 1928 it debuted “Steamboat Willie”, the first cartoon to feature a synchronized soundtrack. It now holds more than 4,000 patents with applications in theme parks, films, and merchandise, according to a search of the US Patent and Trademark Office records.

Bob Iger, now in his second stint as Disney’s chief executive, made the embrace of technology one of his three priorities when he was first named CEO in 2005.

Three years later, the company announced a major research and development initiative with top technology universities around the world, funding labs at the Swiss Federal Institute of Technology in Zurich and Carnegie Mellon University in Pittsburgh, Pennsylvania. It closed the Pittsburgh lab in 2018.

Disney’s US research group has developed a mixed-reality technology called “Magic Bench” that allows people to share a space with a virtual character on screen, without the need for special glasses.

In Switzerland, Disney Research has been exploring AI, machine learning, and visual computing, according to its website. It has spent the last decade creating “digital humans” that it describes as “indistinguishable” from their corporeal counterparts, or fantasy characters “puppeteered” by actors.

This technology is used to augment digital effects, not replace human actors, according to a source familiar with the matter.

Its Medusa performance capture system has been used to reconstruct actors’ faces without using traditional motion-capture techniques, and this technology has been used in more than 40 films, including Marvel Entertainment’s “Black Panther: Wakanda Forever.”

“AI research at Disney goes back a very long time and revolves around all the things you see being discussed today: Can we have something that helps us make movies, games, or conversational robots inside theme parks that people can talk to?” said one executive who has worked with Disney.

Hao Li, CEO and co-founder of Pinscreen, a Los Angeles-based company that creates AI-driven virtual avatars, said he worked on multiple research papers with Disney’s lab while studying in Zurich from 2006 to 2010.

“They basically do research on anything based on performance capture of humans, creating digital faces,” said Li, a former research lead at Disney-owned Industrial Light & Magic. “Some of these techniques will be adopted by Disney entities.”

Disney Imagineering last year unveiled the company’s first initiative in an AI-driven character experience, the D3-09 cabin droid in the Star Wars Galactic Starcruiser hotel, which answered questions on a video screen and learned and changed based on conversations with guests.

“Not only is she a great character to interact with and always available in your cabin, which I think is very cool, behind the scenes, but it’s also a very cool piece of technology,” Imagineering executive Scott Trowbridge said at the time. 

© Thomson Reuters 2023 



Meta to Launch AI-Powered Chatbots With Different Personalities by September: Report

Meta Platforms is preparing to launch a range of artificial intelligence (AI) powered chatbots that exhibit different personalities as soon as September, the Financial Times reported on Tuesday.

Meta has been designing prototypes for chatbots that can have humanlike discussions with its users, as the company attempts to boost engagement on its social media platforms, according to the report, citing people with knowledge of the plans.

The Menlo Park, California-based social media giant is even exploring a chatbot that speaks like Abraham Lincoln and another that advises on travel options in the style of a surfer, the report added. The purpose of these chatbots will be to provide a new search function as well as offer recommendations.

The report comes as Meta executives are focusing on boosting retention on its new text-based app Threads, after the app lost more than half of its users in the weeks following its launch on July 5.

Meta did not immediately respond to a Reuters request for comment.

The Facebook parent reported a strong rise in advertising revenue in its earnings last week, forecasting third-quarter revenue above market expectations.

The company has been climbing back from a bruising 2022, buoyed by hype around emerging AI technology and an austerity drive in which it has shed around 21,000 employees since last fall.

Bloomberg News reported in July that Apple is working on AI offerings similar to OpenAI’s ChatGPT and Google’s Bard, adding that it has built its own framework, known as ‘Ajax’, to create large language models and is also testing a chatbot that some engineers call ‘Apple GPT’.

© Thomson Reuters 2023



US President Said to Sign New Order to Limit US Tech Investments in China by Mid-August

US President Joe Biden is planning to sign an executive order to limit critical US technology investments in China by mid-August, according to people familiar with the internal deliberations.

The order focuses on semiconductors, artificial intelligence and quantum computing. It won’t affect any existing investments and will only prohibit certain transactions. Other deals will have to be disclosed to the government.

The timing for the order, slated for the second week of August, has slipped many times before, and there is no guarantee it won’t be delayed again. But internal discussions have already shifted from the substance of the measures to rolling out the order and accompanying rule, said the people familiar who spoke on condition of anonymity.

The restrictions won’t take effect until next year, and their scope will be laid out in a rulemaking process, involving a comment period so stakeholders can weigh in on the final version.

A spokeswoman for the National Security Council declined to comment.

The investment controls are part of a broader White House effort to limit China’s capabilities to develop the next-generation technologies expected to dominate national and economic security. The effort has complicated the Biden administration’s already fraught relations with China, which sees the restrictions as an effort to contain and isolate the country.

China’s envoy in Washington said earlier this month that Beijing would retaliate if the US imposes new limits on technology or capital flows but didn’t detail what actions the country could take.

Treasury Secretary Janet Yellen has sought to calm Chinese anger over the curbs, saying they wouldn’t significantly damage China’s ability to attract US investment and were narrowly tailored.

“These would not be broad controls that would affect US investment broadly in China, or in my opinion, have a fundamental impact on affecting the investment climate for China,” Yellen said in an interview with Bloomberg Television earlier in July.

Yellen emphasized the restrictions as well as existing export controls were not in retaliation for any specific actions from China or intended to curtail the country’s growth.

During her visit to China earlier this month, Yellen reiterated that stance in a meeting in Beijing with Chinese Vice Premier He Lifeng.

National Security Adviser Jake Sullivan first publicly discussed the concept in July 2021. China hawks in the US are eager for tougher and faster action. Lawmakers from both parties have also shown interest in legislating on the matter, although a bill has not yet made it to Biden’s desk.

The Senate this week passed an amendment to the national defense policy bill that would require firms to notify the government about certain investments in China and other countries of concern, although they wouldn’t be subject to review or possible prohibition.

© 2023 Bloomberg LP



Meta to Release Open Source AI Model, Llama, to Compete Against OpenAI, Google’s Bard

Meta is releasing a commercial version of its open-source artificial intelligence model Llama, the company said on Tuesday, giving start-ups and other businesses a powerful free-of-charge alternative to pricey proprietary models sold by OpenAI and Google.

The new version of the model, called Llama 2, will be distributed by Microsoft through its Azure cloud service and will run on the Windows operating system, Meta said in a blog post, referring to Microsoft as “our preferred partner” for the release.

The model, which Meta previously provided only to select academics for research purposes, also will be made available via direct download and through Amazon Web Services, Hugging Face and other providers, according to the blog post and a separate Facebook post by Meta CEO Mark Zuckerberg.

“Open source drives innovation because it enables many more developers to build with new technology,” Zuckerberg wrote. “I believe it would unlock more progress if the ecosystem were more open.”

Making a model as sophisticated as Llama widely available and free for businesses to build atop threatens to upend the early dominance established in the nascent market for generative AI software by players like OpenAI, which Microsoft backs and whose models it already offers to business customers via Azure.

The first Llama was already competitive with models that power OpenAI’s ChatGPT and Google’s Bard chatbot, while the new Llama has been trained on 40 percent more data than its predecessor, with more than 1 million annotations by humans to fine-tune the quality of its outputs, Zuckerberg said.

“Commercial Llama could change the picture,” said Amjad Masad, chief executive at software developer platform Replit, who said more than 80 percent of projects there use OpenAI’s models.

“Any incremental improvement in open-source models is eating into the market share of closed-source models because you can run them cheaply and have less dependency,” said Masad.

The announcement follows plans by Microsoft’s largest cloud rivals, Alphabet’s Google and Amazon, to give business customers a range of AI models from which to choose.

Amazon, for instance, is marketing access to Claude – AI from the high-profile startup Anthropic – in addition to its own family of Titan models. Google, likewise, has said it plans to make Claude and other models available to its cloud customers.

Until now, Microsoft has focused on making technology available from OpenAI in Azure.

Asked why Microsoft would support an offering that might degrade OpenAI’s value, a Microsoft spokesperson said giving developers choice in the types of models they use would help extend its position as the go-to cloud platform for AI work.

Internal memo

For Meta, a flourishing open-source ecosystem of AI tech built using its models could stymie rivals’ plans to earn revenue off their proprietary technology, the value of which would evaporate if developers could use equally powerful open-source systems for free.

A leaked internal Google memo titled “We have no moat, and neither does OpenAI” lit up the tech world in May after it forecast just such a scenario.

Meta is also betting that it will benefit from the advancements, bug fixes and products that may grow out of its model becoming the go-to default for AI innovation, as it has over the past several years with its widely-adopted open source AI framework PyTorch.

As a social media company, Zuckerberg told investors in April, Meta has more to gain by effectively crowd-sourcing ways to reduce infrastructure costs and maximize creation of new consumer-facing tools that might draw people to its ad-supported services than it does by charging for access to its models.

“Unlike some of the other companies in the space, we’re not selling a cloud computing service where we try to keep the different software infrastructure that we’re building proprietary,” Zuckerberg said.

“For us, it’s way better if the industry standardizes on the basic tools that we’re using and therefore we can benefit from the improvements that others make.”

Releasing Llama into the wild also comes with risks, however, as it supercharges the ease with which unscrupulous actors may build products with little regard for safety controls.

In April, Stanford researchers took down a chatbot they had built for $600 using a version of the first Llama model after it generated unsavory text.

Meta executives say they believe public releases of technologies actually reduce safety risks by harnessing the wisdom of the crowd to identify problems and build resilience into the systems.

The company also says it has put in place an “acceptable use” policy for commercial Llama that prohibits “certain use cases,” including violence, terrorism, child exploitation and other criminal activities.

© Thomson Reuters 2023



Microsoft to Charge More for AI Features in Office 365 Software, Make More Secure Version of Bing Search

Microsoft on Tuesday said it would charge at least 53 percent more to access new artificial intelligence features in its widely used office software, in a glimpse at the windfall it hopes to reap from the technology.

The company also said it would make a more secure version of its Bing search engine available immediately to businesses, aiming to address their data-protection concerns, grow their interest in AI and compete more with Google.

At its virtual Inspire conference, the company said customers would pay $30 (roughly Rs. 2,500) per user, per month for its AI copilot in Microsoft 365, which promises to draft emails in Outlook, pen documents in Word and make virtually all an employee’s data accessible via the prompt of a chatbot.

The voluntary upgrade comes on top of publicly listed monthly plans ranging from $12.50 (roughly Rs. 1,000) per user to $57 (roughly Rs. 4,700), meaning the copilot could triple costs for some Microsoft customers.

In an interview, Jared Spataro, its corporate vice president, said the tool would pay for itself through time savings and productivity gains. The copilot summarizes Teams calls, for instance.

“You don’t take notes in meetings anymore, don’t attend some meetings,” he said. “It just changes the way you work.”

Spataro declined to forecast revenue from copilot, which at least 600 enterprises have tested since its March unveiling. The AI program, potentially expensive to operate, is not yet generally available.

In the meantime, Microsoft is pointing businesses to Bing Chat Enterprise, a bot in its search engine that can generate content and make sense of the internet, included with subscriptions used by some 160 million workers.

Unlike the public Bing that millions of web surfers have accessed in recent months, the enterprise version will not allow any viewing or saving of user data to train underlying technology. An employee would have to log in with work credentials to gain the protections.

The rollout follows growing industry concern about staffers entering confidential information into public chatbots, which human reviewers could read or AI could reproduce with careful prompting.

Asked if Bing users were unprotected until now, Spataro said Microsoft had made its privacy policies clear and was eager to bring AI to consumers. The company also announced the ability for users to upload images and search for related content, a feature Google already offers.

Its corporate push for Bing may aid efforts to wrest search advertising share from Google, a market worth $2 billion (roughly Rs. 16,400 crore) in revenue for every percentage point of share gained. It may also draw customers to Microsoft 365 Copilot, an AI upgrade that gives access to business data and compliance controls.

“It’s a very strategic move for us,” Spataro said.

© Thomson Reuters 2023



UN Security Council Holds First Meeting on AI, Highlights Urgent Need for Global Regulation

The United Nations Security Council held its first meeting on artificial intelligence on Tuesday where China said the technology should not become a “runaway horse” and the United States warned against its use to censor or repress people.

Britain’s Foreign Secretary James Cleverly, who chaired the meeting under Britain’s July presidency of the body, said AI will “fundamentally alter every aspect of human life.”

“We urgently need to shape the global governance of transformative technologies because AI knows no borders,” he added after saying that AI could help address climate change and boost economies. But he also warned that the technology fuels disinformation and could aid both state and non-state actors in a quest for weapons.

The 15-member council was briefed by U.N. Secretary-General Antonio Guterres, Jack Clark, co-founder of high-profile AI startup Anthropic, and Professor Zeng Yi, co-director of the China-UK Research Center for AI Ethics and Governance.  

“Both military and non-military applications of AI could have very serious consequences for global peace and security,” Guterres said.

Guterres backs calls by some states for the creation of a new U.N. body “to support collective efforts to govern this extraordinary technology,” modeled on the International Atomic Energy Agency, the International Civil Aviation Organization, or the Intergovernmental Panel on Climate Change.

China’s U.N. Ambassador Zhang Jun described AI as a “double-edged sword” and said Beijing supports a central coordinating role of the U.N. on establishing guiding principles for AI.

“Whether it is good or bad, good or evil, depends on how mankind utilizes it, regulates it and how we balance scientific development with security,” Zhang said, adding that there should be a focus on people and AI for good to regulate development and to “prevent this technology from becoming a runaway horse.”

Deputy U.S. Ambassador to the U.N. Jeffrey DeLaurentis also said countries need to work together on AI and other emerging technologies to address human rights risks that threaten to undermine peace and security.

“No member states should use AI to censor, constrain, repress or disempower people,” he told the council.

Russia questioned whether the council, which is charged with maintaining international peace and security, should be discussing AI.

“What is necessary is a professional, scientific, expertise-based discussion that can take several years and this discussion is already underway at specialized platforms,” said Russia’s Deputy U.N. Ambassador Dmitry Polyanskiy.

© Thomson Reuters 2023


