Google Mulling Gemini AI-Powered Chatbot to Tell Personal Life Stories Using Photos, Search Activity: Report

Google is considering the development of a new chatbot that is capable of telling the story of a user’s life based on their photos and search history, according to a recent CNBC report. The search giant could use large language models (LLMs), such as the recently unveiled multimodal Gemini model, as part of a new AI project. Gemini is touted to compete with OpenAI’s GPT-4 model, and Google claims that its top-of-the-line model outperforms its closest competitor on some benchmarks.

A CNBC report citing internal documents states that one of Google’s AI teams has proposed developing AI-based technology that uses data from users’ smartphones — including photos and search activity — to feed an AI-powered chatbot. The project, dubbed Project Ellman, could then use this information to answer “previously impossible questions”, according to the report.

Instead of simply relying on “just pixels with labels and metadata”, Project Ellman would try to spot patterns in a user’s photos, studying the images and memories captured before and after each photo to gain context, according to the report. The company’s internal document also envisions “Ellman Chat” becoming “Your Life Story Teller.”

Google currently collects users’ photos and stores them on the company’s servers as part of its Google Photos backup and sync feature. The company did not specify whether the data source would be Google Photos content synced to the cloud, or whether the images would be processed on the user’s device.

“This was an early internal exploration and, as always, should we decide to roll out new features, we would take the time needed to ensure they were helpful to people, and designed to protect users’ privacy and safety as our top priority”, a company spokesperson told the publication.

It is unclear whether Google is actively working on adding support for such a personalised AI chatbot that relies on its new Gemini AI models, which were unveiled by the company last week. Google’s most powerful model — Gemini Ultra — won’t be available until next year and is capable of outperforming OpenAI’s GPT-4 model in some tests, according to Google.


Affiliate links may be automatically generated – see our ethics statement for details.

Check out our Latest News and Follow us at Facebook

Original Source

New York state to invest in $10B chip research complex

New York state is joining tech giant IBM and semiconductor manufacturer Micron Technology to invest $10 billion in a state-of-the-art chip research facility at the University of Albany, Gov. Kathy Hochul announced.

NY Creates, a nonprofit entity that oversees The Albany NanoTech Complex where the 50,000-square-foot facility will be built, will supervise the project, according to The Wall Street Journal.

Upon its completion in 2026, the facility is expected to include some of the most advanced chip-making equipment in the world courtesy of ASML Holding, a Dutch company that sells machines worth upwards of hundreds of millions of dollars, The Journal reported.

Once the machinery is installed, the project and its partners — including material-engineering company Applied Materials and electronics firm Tokyo Electron — will work on next-generation chip manufacturing there, per The Journal, citing Hochul’s office.

ASML’s advanced machines fire lasers at droplets of tin to generate extreme ultraviolet light, which is used in a highly complex process to turn silicon wafers into chips, according to the company’s website — all while keeping the environment around the chip “about 10,000 times cleaner than the outside air.”

New York state is joining semiconductor leaders including IBM, Micron Technology, Applied Materials and Tokyo Electron in their investment in a $10 billion chip research facility at the University of Albany. Gregory P. Mango
The 50,000-square-foot manufacturing destination will feature multimillion-dollar chip-making equipment courtesy of ASML Holding. REUTERS

The acquisition of machines with this advanced technology for the Albany complex expansion is part of the $53 billion Chips Act, which the Commerce Department began implementing earlier this year to counter technological advances in China while boosting national security by slashing America’s reliance on imported chips.

New York state has committed $1 billion to the project, which will be used to purchase the ASML equipment and construct the building, The Journal reported.

The facility could also help New York’s bid to be the designated research hub under the Chips Act — which included $11 billion for a National Semiconductor Technology Center designed to advance domestic chip research and development, according to The Journal.

The University of Albany’s new building is also expected to have a broader impact on the local economy.

Hochul’s office predicts its opening will create some 700 new jobs and bring in at least $9 billion in private money.

The Post has sought comment from Hochul’s office, as well as the University of Albany.

The Albany NanoTech Complex — which was first constructed in the late ’90s as a lone 70,000-square-foot facility and has since ballooned into a 1.65 million square-foot complex — has already made headway on its chip research efforts.

The University of Albany is set to welcome the chip-making facility in two years. It will be a part of its Albany NanoTech Complex. The first building in the complex opened in the 1990s.

New York boasts a number of large chip factories, including ones operated by semiconductor manufacturer GlobalFoundries, which works with San Diego-based Qualcomm, the maker of chips found in Android devices from brands such as Asus and Sony.

Fellow semiconductor manufacturing company Onsemi also boasts a manufacturing facility in Rochester, NY, and Wolfspeed, a semiconductor manufacturer that focuses on silicon carbide, expanded to the East Coast with the opening of its Marcy, NY, facility last year.


Apple Releases Open Source MLX Framework for Efficient Machine Learning on Apple Silicon

Apple recently released MLX — or ML Explore — the company’s machine learning (ML) framework for Apple Silicon computers. The framework is specifically designed to simplify the process of training and running ML models on computers powered by Apple’s M1, M2, and M3 series chips. The company says that MLX features a unified memory model. Apple has also demonstrated the use of the framework, which is open source, allowing machine learning enthusiasts to run it on their own computers.

According to details shared by Apple on code hosting platform GitHub, the MLX framework has a C++ API along with a Python API that is closely based on NumPy, the Python library for scientific computing. Users can also take advantage of higher-level packages that enable them to build and run more complex models on their computer, according to Apple.

MLX simplifies the process of training and running ML models on a computer — previously, developers had to rely on a translation step to convert and optimise their models (using Core ML). With MLX, users of Apple Silicon computers can train and run their models directly on their own devices.

Apple shared this image of a big red sign with the text MLX, generated by Stable Diffusion in MLX
Photo Credit: GitHub/ Apple

Apple says that MLX’s design follows other popular frameworks used today, including ArrayFire, Jax, NumPy, and PyTorch. The firm has touted its framework’s unified memory model — MLX arrays live in shared memory, and operations on them can be performed on any supported device type (currently, Apple supports the CPU and GPU) without the need to create copies of data.

The company has also shared examples of MLX in action, performing tasks like image generation using Stable Diffusion on Apple Silicon hardware. When generating a batch of images, Apple says that MLX is faster than PyTorch for batch sizes of 6, 8, 12, and 16 — with up to 40 percent higher throughput than the latter.

The tests were conducted on a Mac powered by an M2 Ultra chip, the company’s fastest processor to date — MLX is capable of generating 16 images in 90 seconds, while PyTorch would take around 120 seconds to perform the same task, according to the company.
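As a rough sanity check, the quoted timings can be converted into throughput. The arithmetic below uses only the figures Apple cites (16 images in 90 seconds versus about 120 seconds), and the roughly 33 percent advantage for this specific run is consistent with the “up to 40 percent” figure applying at other batch sizes:

```python
# Sanity-check Apple's quoted Stable Diffusion timings: 16 images in
# 90 seconds with MLX versus about 120 seconds with PyTorch.
images = 16
mlx_seconds = 90
pytorch_seconds = 120

mlx_throughput = images / mlx_seconds          # images per second
pytorch_throughput = images / pytorch_seconds  # images per second

# Relative throughput advantage of MLX over PyTorch for this run.
speedup = mlx_throughput / pytorch_throughput - 1
print(f"{speedup:.0%}")  # 33%
```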

Other examples of MLX in action include generating text using Meta’s open source LLaMA language model, as well as the Mistral large language model. AI and ML researchers can also use OpenAI’s open source Whisper tool to run the speech recognition models on their computer using MLX.

The release of Apple’s MLX framework could make ML research and development easier on the company’s hardware, eventually allowing developers to build apps and services with on-device ML features that run efficiently on a user’s computer.



iPhone 16 to Get ‘Substantial’ Microphone Upgrade for Improved Siri Experience With AI Features: Ming-Chi Kuo

iPhone 16 is expected to arrive next year as the successor to the iPhone 15 series of smartphones, and while Apple’s next handsets aren’t expected to debut until the second half of 2024, details of their specifications have already begun to surface online. According to TF International Securities analyst Ming-Chi Kuo, Apple’s upcoming iPhone 16 models will be equipped with upgraded microphones designed to significantly improve the Siri experience and voice input on the company’s next smartphones.

In a blog post, Kuo shared details of his latest industry survey, which indicates that Apple’s next smartphones will “feature a significant upgrade in microphone specifications”. The biggest improvement would be to the signal-to-noise ratio (SNR), the measure of the strength of the signal being recorded relative to the background or ambient noise. The microphones on the iPhone 16 series will also offer better water resistance, according to Kuo.
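To make the metric concrete, SNR is usually expressed in decibels; the short sketch below shows the standard calculation (the function and the amplitude values are illustrative, not Apple’s specifications):

```python
import math

# Signal-to-noise ratio in decibels, computed from RMS amplitudes.
# A higher SNR means the voice being recorded stands out more clearly
# from background noise, the property Kuo says Apple wants to improve.
def snr_db(signal_rms: float, noise_rms: float) -> float:
    return 20 * math.log10(signal_rms / noise_rms)

# A signal 100x the amplitude of the background noise:
print(snr_db(100.0, 1.0))  # 40.0
```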

Apple wants to upgrade the microphones on the iPhone 16 in order to improve Siri performance on the handsets, the analyst says, explaining that this could be connected to the Cupertino company’s decision to rejig its Siri team to work on including large language models (LLMs) and artificial intelligence-generated content (AIGC) in Q3 2023.

The inclusion of upgraded microphone technology is expected to come at a cost. According to Kuo, the average selling price (ASP) of the microphones for all models in the iPhone 16 series could rise substantially, by between 100 and 150 percent compared to their predecessors. He adds that Apple’s component suppliers Goertek and AAC are expected to benefit the most from the company’s hardware upgrades.

Kuo isn’t the first to predict that Apple will offer significant upgrades to Siri with AI improvements to the iOS operating system next year. In October, Bloomberg’s Mark Gurman said that Apple was working on major upgrades to the voice assistant’s capabilities, powered by Apple’s own LLM, along with better suggestions for the Messages app. Despite purported concerns with the AI features, the improved functionality could be introduced as early as 2024, according to Gurman.



OpenAI Postpones Launch of Custom GPT Store Announced at DevDay to Early 2024

ChatGPT maker OpenAI has delayed the launch of its custom GPT store until early 2024, according to an internal memo seen by Reuters on Friday.

During its first developer conference in November, OpenAI introduced the custom GPTs and store, which were set to be launched later that month.

The company is continuing to “make improvements” to GPTs based on customer feedback, the memo said.

The delay comes against the backdrop of the startup’s surprise ouster of its CEO Sam Altman and his subsequent reinstatement following threats by employees to quit.

The GPTs are early versions of AI assistants that can perform real-world tasks such as booking flights on behalf of a user. The store is also expected to allow users to share their GPTs and earn money based on the number of users.

Last month, OpenAI announced it intends to work with organisations to produce public and private datasets for training artificial intelligence (AI) models.

Popular chatbot ChatGPT, which can generate poems and prose from simple prompts, is based on large language models trained on vast amounts of data available on the Internet.

The company’s latest effort could help it produce more nuanced training data that are more conversational in style.

“We’re particularly looking for data that expresses human intention, across any language, topic and format,” the company said in a blog post.

OpenAI said it is seeking partners to help it create an open-source dataset for training language models. This dataset would be public for anyone to use in AI model training, it said.

The company said it is also preparing private datasets for training proprietary AI models.

© Thomson Reuters 2023



OnePlus AI Music Studio With Audio and Video Generation Features Launched, Global Contest Announced

OnePlus AI Music Studio was recently launched by the Chinese technology firm for users in India and global markets, amid a rise in the popularity of generative artificial intelligence (AI) technology — tools that allow users to create content like text, images, and even code using simple commands or controls. You do not have to own a OnePlus device in order to access the service, and the company has announced a competition centred around the tool with rewards for users from specific regions.

The new AI music studio from OnePlus is available for both Indian and non-Indian users and allows users to sign up using their email address. You can pick between rap and electronic dance music (EDM) — pop is coming soon — along with moods like happy, energetic, romantic, and sad. OnePlus previously teased the AI-powered music video creator — it was previously believed to be a new speaker from the Chinese smartphone maker.

OnePlus AI Music Studio lets you pick genres, moods, and video styles
Photo Credit: OnePlus

Once you have picked the genre and mood, you can choose from a range of themes for your music video, such as cyberpunk, nature, study and work, and travel, along with a random option and an “AI Music Video” option. The website will then ask you to provide a prompt describing the song, and the tool will generate an audio track and video based on your input.

During our testing, we found that the AI tool produced different tracks when we reused the same options and prompts. The video generated showed an animated representation of a person that slowly morphed into different characters. The vertical (portrait orientation) video can either be downloaded or published — choosing the latter allows the company to share your creation on the music studio’s home page. You can use the share button to ask friends to like your video; more likes improve your chances of winning prizes, according to the company.

In order to popularise the new AI service, OnePlus has announced a contest for users in India, North America, and Europe. The company says it will pick 100 entries from users in all three regions. Participants will need to submit their music tracks to the company by December 17 at 5pm.

Winners selected by the company will get coupons to redeem for products on OnePlus’ website, according to the company. The Chinese smartphone maker says that users can submit multiple entries, but those that contain offensive or inappropriate content, or violate copyright laws, will be disqualified. The company is expected to announce the contest results in the coming days.



For the latest tech news and reviews, follow Gadgets 360 on X, Facebook, WhatsApp, Threads and Google News. For the latest videos on gadgets and tech, subscribe to our YouTube channel.



WhatsApp AI Chats Shortcut Rolling Out to Some Beta Testers Alongside Status Filters

WhatsApp has begun rolling out a visual change designed to improve the experience of using the upcoming artificial intelligence (AI) chats feature on the messaging platform. Earlier this year, the Meta-owned firm revealed that it was working on support for AI chatbots on the widely used messaging platform. These AI-powered assistants are currently available to some users in the US. Meanwhile, WhatsApp has also begun testing a new section for status updates that allows you to view and filter a list of status updates.

After updating to WhatsApp beta for Android 2.23.24.26, some beta testers are seeing a new shortcut to initiate these chats right from the main chat list, via a floating action button (FAB), according to feature tracker WABetaInfo. A white button with a multi-coloured ring is shown above the new chat button.

A screenshot of the new AI button shared by the feature tracker
Photo Credit: WABetaInfo

Meta announced in September that it was adding AI assistants to its messaging apps WhatsApp, Instagram, and Messenger. These chatbots are powered by Meta’s large language model, Llama 2, and will be able to respond to user queries and search the web using Bing. The assistants will also be able to generate images using text prompts, and will support AI avatars with a range of personalities, according to the company.

While it might be a while before Meta rolls out AI chats to users in other regions, the addition of the new button should make it easier for users to discover the feature on their own — compared to the additional steps required to create a new AI conversation via the new chat button.

On the other hand, WABetaInfo has also spotted a new filtered status section on WhatsApp beta for Android 2.23.25.3, which allows users to view all available status updates — the Instagram Stories-like feature also available on WhatsApp — in a vertical list. The list of status updates also includes channels, according to the feature tracker.

In order to help users manage the list of statuses, the latest WhatsApp beta adds four new filters — All, Recent, Viewed, and Muted — at the top of the screen. The ability to organise and filter statuses is expected to make its way to users on the stable update channel on both iOS and Android in the future.



Microsoft Ignite 2023: Maia, Cobalt AI Chips to Power Copilot and Azure Services Announced

Microsoft on Wednesday announced a duo of custom-designed computing chips, joining other big tech firms that – faced with the high cost of delivering artificial intelligence services – are bringing key technologies in-house.

Microsoft said it does not plan to sell the chips but instead will use them to power its own subscription software offerings and as part of its Azure cloud computing service.

At its Ignite developer conference in Seattle, Microsoft introduced a new chip, called Maia, to speed up AI computing tasks and provide a foundation for its $30-a-month “Copilot” service for business software users, as well as for developers who want to make custom AI services.

The Maia chip was designed to run large language models, a type of AI software that underpins Microsoft’s Azure OpenAI service and is a product of Microsoft’s collaboration with ChatGPT creator OpenAI.

Microsoft and other tech giants such as Alphabet are grappling with the high cost of delivering AI services, which can be 10 times greater than for traditional services such as search engines.

Microsoft executives have said they plan to tackle those costs by routing nearly all of the company’s sprawling efforts to put AI in its products through a common set of foundational AI models. The Maia chip, they said, is optimized for that work.

“We think this gives us a way that we can provide better solutions to our customers that are faster and lower cost and higher quality,” said Scott Guthrie, the executive vice president of Microsoft’s cloud and AI group.

Microsoft also said that next year it will offer its Azure customers cloud services that run on the newest flagship chips from Nvidia and Advanced Micro Devices. Microsoft said it is testing GPT-4 – OpenAI’s most advanced model – on AMD’s chips.

“This is not something that’s displacing Nvidia,” said Ben Bajarin, chief executive of analyst firm Creative Strategies.

He said the Maia chip would allow Microsoft to sell AI services in the cloud until personal computers and phones are powerful enough to handle them.

“Microsoft has a very different kind of core opportunity here because they’re making a lot of money per user for the services,” Bajarin said.

Microsoft’s second chip, also announced Wednesday, is designed to be both an internal cost saver and an answer to Microsoft’s chief cloud rival, Amazon Web Services.

Named Cobalt, the new chip is a central processing unit (CPU) made with technology from Arm Holdings. Microsoft disclosed on Wednesday that it has already been testing Cobalt to power Teams, its business messaging tool.

But Microsoft’s Guthrie said his company also wants to sell direct access to Cobalt to compete with the “Graviton” series of in-house chips offered by Amazon Web Services (AWS).

“We are designing our Cobalt solution to ensure that we are very competitive both in terms of performance as well as price-to-performance (compared with Amazon’s chips),” Guthrie said.

AWS will hold its own developer conference later this month, and a spokesman said that its Graviton chip now has 50,000 customers.

“AWS will continue to innovate to deliver future generations of AWS-designed chips to deliver even better price-performance for whatever customer workloads require,” the spokesman said after Microsoft announced its chip.

Microsoft gave few technical details that would allow gauging the chips’ competitiveness versus those of traditional chipmakers. Rani Borkar, corporate vice president for Azure hardware systems and infrastructure, said both are made with 5-nanometer manufacturing technology from Taiwan Semiconductor Manufacturing Co.

She added that the Maia chip would be strung together with standard Ethernet network cabling, rather than a more expensive custom Nvidia networking technology that Microsoft used in the supercomputers it built for OpenAI.

“You will see us going a lot more the standardization route,” Borkar told Reuters.

© Thomson Reuters 2023



Apple Hopes AI-Infused iOS 18 Update Will Help Sell iPhone 16 Series Lacking Major Hardware Upgrades: Gurman

Apple’s iOS 18 update — expected to arrive next year alongside the purported iPhone 16 series — is more ‘critical’ than usual, according to Bloomberg’s Mark Gurman. In his weekly newsletter, Gurman states that Apple’s next major operating system update is expected to bring major improvements over iOS 17 and is expected to help the company sell its next generation of smartphones. Apple recently confirmed that the company is working on adding features powered by artificial intelligence (AI) to the iOS 18 update.

According to details shared by Gurman in his weekly Power On newsletter, Apple’s iPhone 16 isn’t expected to offer extensive hardware upgrades next year, which means that the iOS 18 update needs to be “extra impressive” to convince customers to upgrade to the successors to the iPhone 15, iPhone 15 Plus, iPhone 15 Pro, and iPhone 15 Pro Max models unveiled this year.

In order to ensure the code for the iOS 18 update is stable and bug-free, the company reportedly paused development recently in an attempt to weed out previously undetected issues. Gurman says that the week-long pause has delayed the company’s progress towards the second internal milestone — adding that there are four such six-week milestones each year before Apple unveils its operating system updates at its Worldwide Developers Conference in June.

One of the reasons Apple needs its code to be bug-free is that iOS 18 is expected to bring “major new features and designs” after years of relatively iterative iOS version updates, Gurman says, citing Apple’s senior management. It is currently unclear how many of these features will be available on previous iPhone models that are eligible to receive the update.

Apple’s rivals have already announced AI-powered features that are either rolling out to users or expected to roll out in the coming months — including Microsoft Copilot on Windows 11 and Google Assistant with Bard. Siri is expected to get a few AI-powered upgrades next year, built on Apple’s own large language model (LLM). Other apps that will reportedly receive AI-backed upgrades include Pages, Keynote, Numbers, and Apple Music. We can expect to hear more about Apple’s software updates in the coming months.



Elon Musk Announces ‘Grok’ AI Chatbot for X Premium+ Subscribers, to Compete With ChatGPT

Elon Musk has unveiled the first AI model developed by xAI, a startup created by the owner of the microblogging platform formerly known as Twitter. Dubbed ‘Grok’, the new AI model is designed to compete with rival technology from ChatGPT creator OpenAI, Google, and Microsoft, and is said to reflect the humour Musk regularly displays in his posts on X. The chatbot is currently being tested by a small number of users and will only be accessible to X Premium+ subscribers, according to the Tesla CEO.

In a blog post on Sunday, xAI announced that it was rolling out access to the Grok prototype to a “limited number of users” in the US ahead of a wider release. The startup has also revealed it will add support for new features and functionality over the coming months and users can join a waitlist to try out the new Grok chatbot on the xAI website. Grok will be available via X and a standalone app, according to Musk.

The AI startup has touted some of the Grok chatbot’s features, including the ability to access real-time information from X, and the ability to suggest questions. However, the company has also warned that Grok has a “rebellious streak” and will offer answers with “a bit of wit”. According to the firm, the chatbot was trained over a period of two months.

It runs on xAI’s new large language model (LLM), dubbed Grok-1. While development started four months ago, the startup says that the LLM has gone through several iterations and is currently capable of scoring 63.2 percent on the HumanEval coding task. It also scores 73 percent on the Massive Multitask Language Understanding (MMLU) dataset, according to the company.

There’s no word on when Grok will be available to all users, but Musk announced via X that access will be limited to paying subscribers. When Grok is released to the public, X Premium+ subscribers who pay $16 (roughly Rs. 1,300) a month will be able to use the chatbot via the microblogging platform.


