Apple Could Introduce AI Notification Summary, Conversational Voice for Siri at WWDC 2024: Report

Apple is reportedly working on an artificial intelligence (AI)-powered notification summary feature that could debut with iOS 18 at the Worldwide Developers Conference (WWDC) 2024. The Cupertino-based tech giant is likely to unveil several “practical” AI features when it hosts its annual developer-focused event on June 10. Another planned feature will reportedly make Siri more conversational by adding natural and emotive speech to the virtual assistant. Apart from these, AI functionality could also make its way to Apple’s Safari, Photos, and Notes apps.

Apple to Upgrade Siri With AI Features

AppleInsider reports that the tech giant is planning to upgrade Siri in a major way by incorporating AI capabilities into it. Citing people familiar with the matter, the publication states that Apple has internally named its AI initiative project Greymatter and is working on introducing a feature called “Greymatter Catch Up”. It is essentially a notification summarisation feature, but the publication claims it will work via Siri.

Siri might also be able to handle complex commands better, the report claimed. The assistant is said to be getting a new smart response framework and an on-device large language model (LLM) that will understand the context of a request and adjust its replies accordingly. The report also expects Siri’s language to become more conversational with the inclusion of the LLM.

Another focus area for Apple is to integrate AI features into its existing apps to make the user experience smoother. For instance, a previous report claimed that the Safari browser might get a web page summarisation feature.

This could be similar to the features offered by Google’s Gemini and Microsoft’s Copilot. Additionally, a Web Eraser feature might also be introduced that can delete any element from a web page including banner ads, images, and text.

The Photos app could also get an AI tool dubbed ‘Clean Up’ which is said to offer ‘Photoshop-grade’ editing capabilities to users. The feature can reportedly remove any unwanted background objects from the image.

An AI-based real-time audio transcription feature could also be added to the Notes app. A report detailing this feature claimed that users will be able to read, edit, copy, and share these transcripts later. To help users catch up on long transcriptions or notes, a text summarisation feature is also being added to the Notes app.

Reportedly, these in-house AI features are being powered by Apple’s Ajax LLM, which the company is keeping under wraps for now.



Apple Focuses on a Pragmatic AI Strategy as It Plans to Integrate New Features Within Core Apps: Report

Apple’s Worldwide Developers Conference (WWDC) 2024 is likely to become one of the most important events in the company’s recent history. The tech giant has been preparing its artificial intelligence (AI) strategy and new AI-powered features for its user base for the past year and a half. CEO Tim Cook has also promised stakeholders “exciting” generative AI features in the last two earnings calls. A new report now claims that the iPhone maker might take a pragmatic route and offer features that are more practical for its users.

Apple’s Practical AI Vision

While the Cupertino-based tech giant would likely want to leave a lasting impact with its WWDC event on June 10, following up after OpenAI, Google, and Microsoft’s AI announcements might be difficult. Bloomberg’s Mark Gurman highlighted in his Power On newsletter that instead of competing with flashy and awe-inspiring AI advancements, the company is more likely to introduce features geared towards practical use.

Some of these features have been reported in the past. Siri could get an AI integration that would make it more conversational and adept at handling complex tasks. The Safari browser could also get an AI-powered web page summary feature. The Notes app is also said to be getting a live transcription feature. Additionally, a report mentions that custom AI emojis could be coming to the iPhone.

As per the report, the idea is to offer users features that they can make use of daily. Utilising its massive user base could make Apple a top contender in the AI space. However, this is easier said than done. Gurman points out that Apple is still steadfast in its approach of making some of the more compute-heavy AI features available via servers. This could be a difficult proposition for iPhone users to accept, especially since the company has spent years preaching data privacy and locally processed features.

Apple’s OpenAI Deal

Apart from these, the tech giant might have another ace up its sleeve. The report claims Apple has closed a deal with OpenAI that will allow the iPhone maker to integrate ChatGPT within its devices. This would allow Apple to bring one of the major chatbots to its smartphones and possibly Mac devices. If the company has any other surprises for users, they will be revealed on June 10 when the keynote session of WWDC 2024 commences.


Apple to Reportedly Add AI-Powered Audio Transcription and Summarisation Features to Multiple iOS 18 Apps

Apple is reportedly working on two artificial intelligence (AI)-powered features that could be added to multiple apps in iOS 18. The Cupertino-based tech giant is said to be readying real-time audio transcription and summarisation features that could come to its Voice Memos and Notes apps. These features could also appear in iPadOS 18 and macOS 15. Notably, these features, along with the next generation of Apple’s operating systems, are expected to be unveiled at the company’s Worldwide Developers Conference (WWDC) scheduled for June 10.

According to a report by AppleInsider, the iPhone maker is leveraging AI to bring a real-time audio transcription feature that will let users read what is being said. Citing people familiar with the matter, the report highlighted that users will be able to read, edit, copy, and share these transcripts later. Alongside this, the tech giant is also said to be introducing a summarisation feature. These features are reported to be integrated into the Voice Memos app, the Notes app, and more.

Pixel smartphones already ship with a recording app that offers real-time transcriptions and conversation summaries. One of the more popular features of the lineup, it has been used to record meetings and lectures, and to take notes on the go. With Apple’s foray into AI, the Voice Memos app could be revamped similarly.

As per the report, the transcriptions will be shown in the middle of the app window, which currently shows a larger interface for the recorded audio. A transcription button, shaped like a speech bubble, is also being added; tapping it will show the transcription for a particular audio recording.

The Notes app is also expected to receive this feature as well as a summarisation feature that will provide a short description of the conversation, followed by the key points and action items in an easy-to-read format. These features are also reported to be added to iPadOS 18 and macOS 15.

Apple is also rumoured to be using AI to significantly improve the capabilities of Siri. As per a recent report, the company’s native virtual assistant will get conversational speech, contextual language understanding, and the ability to understand and execute complex commands that contain multiple steps.


Apple’s Siri Assistant Could Get a Massive AI-Charged Revamp at WWDC 2024: Report

At the upcoming Worldwide Developers Conference (WWDC) 2024, Apple could introduce the biggest revamp to its native virtual assistant Siri since the assistant’s launch. The Cupertino-based tech giant is rumoured to unveil its artificial intelligence (AI) strategy and introduce new features for its devices. As per a new report, the central piece of this move will be making Siri smarter and more efficient. The iPhone maker is expected to either use in-house AI models or licence models from a third party to improve Siri’s capabilities.

According to a report by the New York Times, top executives at Apple decided last year that the virtual assistant needed a major revamp to stay relevant. The realisation came as AI chatbots such as OpenAI’s ChatGPT showcased the diverse range of tasks they can complete. The contextual understanding of language, which allows users to make vague queries and still get the right response, was also considered a significant upgrade. Citing unnamed people familiar with the matter, the report highlighted that Apple is working on adding AI capabilities to Siri.

The report highlighted that improving Siri has become a “tent pole project” at Apple’s Cupertino headquarters, a term that refers to a once-in-a-decade initiative at the company. The company is said to be gearing up to showcase the new Siri at the WWDC 2024 event on June 10. The two focus areas for improving Siri are conversational language and versatility of tasks, the report mentioned. However, it is believed that the tech giant does not want its virtual assistant to turn into another AI-powered chatbot.

It is believed that instead of turning Siri into a generalist chatbot capable of generating poetry and essays, its output will be controlled and limited to the tasks it already performs, but with significant improvements. Users might be able to ask follow-up questions without repeating all the information, something Siri cannot do currently. It might also be able to perform more tasks across the device. Specific details are not known at present.

However, it is said that Apple intends to keep Siri private and run it entirely on-device. This means the iPhone maker will be limited to the on-device neural processing unit (NPU) to power the computation and minimise latency issues. This is interesting given that an earlier report claimed Apple is also working on building AI chips for its data centres.

The NY Times report claims Apple’s decision not to rely on cloud servers stems from cost-effectiveness. Highlighting an example, it said OpenAI has to spend 12 cents (roughly Rs. 16) for every 1,000 words generated by ChatGPT due to cloud computing costs. Apple might be able to circumvent this expense by keeping the feature on the device.
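
To put that figure in perspective, here is a rough back-of-envelope sketch. Only the 12-cents-per-1,000-words rate comes from the report; the per-user usage and user count below are illustrative assumptions, not reported numbers.

```python
# Back-of-envelope sketch of the cloud-cost argument. The only figure from the
# report is $0.12 per 1,000 generated words; the usage and user counts below
# are illustrative assumptions, not reported numbers.
cost_per_1000_words = 0.12          # USD, as cited in the NYT report
words_per_user_per_day = 2_000      # assumption: a moderately active assistant user
active_users = 100_000_000          # assumption: a fraction of the iPhone install base

daily_cost = cost_per_1000_words * (words_per_user_per_day / 1000) * active_users
print(f"Hypothetical daily cloud bill: ${daily_cost:,.0f}")        # roughly $24 million per day
print(f"Hypothetical annual cloud bill: ${daily_cost * 365:,.0f}")
# On-device inference shifts this recurring server cost onto hardware the user already owns.
```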


Apple CEO Tim Cook Hints at “Some Very Exciting” Generative AI Announcements Soon

Apple might reveal its artificial intelligence (AI) plans earlier than expected. It was believed that the Cupertino-based tech giant would unveil the AI features it is building during its Worldwide Developers Conference scheduled for June 10. However, CEO Tim Cook has now said that information about generative AI may be shared with users soon, as per a report. With Apple’s Let Loose event coming up on May 7, there is a slim possibility that the company will hint at the features it will be introducing later this year.

According to a report by CRN, Cook made the statements during the company’s quarterly earnings call. Apple reportedly suffered a revenue decline of 4 percent year-on-year, bringing revenue to $90.8 billion (roughly Rs. 7.5 lakh crore). Addressing the stakeholders at the beginning of the call, Cook said, “We continue to feel very bullish about our opportunity in generative AI. We are making significant investments, and we’re looking forward to sharing some very exciting things with our customers soon.”

The announcement highlights the iPhone maker’s intention to go big on the AI trend. The Apple CEO also highlighted that the company’s innovation with its processors and neural engines gives it a strategic advantage over its rivals in integrating the technology into its devices. He also reportedly spoke about an “unwavering focus on privacy”, hinting that the AI features will likely be processed on-device.

In the last few months, Apple’s AI ambitions have made headlines multiple times. The company has acquired at least two companies working in the AI space, DarwinAI and Datakalab. Apart from that, researchers employed by the tech giant have published several papers on AI models with computer vision, on-device operation, and multimodal capabilities.

Earlier reports have also suggested some of the AI-powered features users might see later this year. The Safari browser is expected to play a key role, as it is rumoured to get an ‘Intelligent Search’ feature that will summarise open articles and web pages. Another AI-powered Web Eraser feature has also surfaced, which can reportedly delete banner ads and other elements on a web page based on users’ preferences. These features are expected to be showcased at WWDC 2024 when Apple unveils iOS 18 and macOS 15.



iPad Pro With M4 Chip Could Launch This Year, Will Be a ‘Truly AI-Powered Device’: Report

Apple is reportedly planning to skip the M3 chipset for the new iPad Pro and instead opt for the upcoming artificial intelligence (AI)-focused M4. It has not even been a year since the Cupertino-based tech giant introduced the M3 family of chipsets and added them to its iMac, MacBook Air, and MacBook Pro. However, a report claims that the company could cut the chipset’s run short in favour of the M4, which is said to feature a new, more powerful Neural Engine that will bolster Apple’s AI vision.

According to a report by Bloomberg’s Mark Gurman in his Power On newsletter, Apple could introduce the M4-powered iPad Pro at its May 7 ‘Let Loose’ event. The report further highlights that the unannounced processor could be equipped with a new Neural Engine that will give it enough power to run on-device computation for AI features. The company is also said to be planning to present the device as its “first truly AI-powered device”.

The new iPad Pro is believed to get a significant upgrade this year. It is rumoured to feature an OLED display, thinner bezels, and be available in “glossy and matte screen versions”. It could also sport MagSafe wireless charging support. For optics, it could get a revamped rear camera module and a landscape-oriented front camera. A new Apple Pencil is also expected to be unveiled at the Apple event which will likely be designed as an accessory for the iPad Pro.

The incorporation of the chipset will not end with the iPad Pro. An earlier report highlighted that Apple could revamp its entire Mac lineup, starting with the Mac mini, with the M4 chipset. That model is expected to launch either at the end of 2024 or in early 2025. The tech giant could then introduce other Mac models with the M4 chip in 2025 and later. Gurman also highlighted that, starting with the iPad Pro, Apple is likely to position all of its products as AI devices.

Gurman further reported that Apple could also brand the A18 chipset, which is expected to feature in the iPhone 16 Pro models, as AI-driven. This claim corroborates multiple reports highlighting the company’s plans to introduce AI features via iOS 18. Based on these reports, it appears the Cupertino-based company is gearing up to go all out in the AI race.



Apple OpenELM Open Source Small AI Models Released, Could Pave the Way for On-Device AI Features on iPhone

Apple has released a new family of artificial intelligence (AI) models dubbed OpenELM. Short for Open-source Efficient Language Models, the family comprises a total of eight AI models: four pre-trained variants and four instruct variants. All of them are small language models (SLMs) that specialise in text-related tasks, highlighting an alignment with the tech giant’s reported ambition of introducing on-device AI features this year. Notably, the company is also said to have acquired a French AI startup called Datakalab, which works with computer vision models.

The OpenELM AI models were spotted on Apple’s Hugging Face page. Introducing the SLMs, the company said, “We introduce OpenELM, a family of Open-source Efficient Language Models. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy.” There are two variants, pre-trained and instruct, each available in 270 million, 450 million, 1.1 billion, and 3 billion parameter sizes.

Parameters are the learned weights in an AI model’s neural network; broadly, the higher the number of parameters, the more capably a model can understand and respond to complex queries. For reference, the recently released Microsoft Phi-3-mini contains 3.8 billion parameters, whereas Google’s Gemma comes with 2 billion parameters. The pre-trained models serve as general-purpose base models, while the instruct variants are fine-tuned to follow instructions and complete tasks.

Small language models might not have the all-encompassing knowledge base or conversational capacity of ChatGPT or Gemini, but they are efficient at handling specific tasks and queries and are generally less error-prone. While Apple did not mention any specific use cases for the AI models, it has made the models’ weights available to the community. The weights are released under Apple’s sample code licence, which allows their usage for both research and commercial purposes.
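
Since the weights are public, a developer could in principle try one of the checkpoints locally. The sketch below is a minimal example assuming the models load through Hugging Face transformers with remote code enabled and a LLaMA-style tokenizer; the exact repository IDs and tokenizer pairing are assumptions that should be verified on Apple’s Hugging Face page.

```python
# Minimal sketch: loading an OpenELM instruct variant from Hugging Face.
# Assumptions (verify on Apple's Hugging Face page): the repo id
# "apple/OpenELM-270M-Instruct" exists, loads with trust_remote_code=True,
# and pairs with a LLaMA-style tokenizer.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "apple/OpenELM-270M-Instruct"       # smallest instruct variant (assumed id)
tokenizer_id = "meta-llama/Llama-2-7b-hf"      # assumed compatible tokenizer

tokenizer = AutoTokenizer.from_pretrained(tokenizer_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "Summarise in one sentence: Apple released a family of small language models."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The 270-million-parameter variant is small enough to run on a laptop CPU, which is what makes this class of model plausible for on-device use.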

Apple’s leaning towards developing SLMs highlights that the company is focused on its vision of on-device AI, as reported earlier. The company has so far published papers on three other AI models, including one that focuses on on-device capabilities, one with multimodal capabilities, and another with computer vision that can understand smartphone screen interfaces.



Apple Could Reportedly Offer AI Features On-Device With iOS 18, But That Might Come at a Cost

Apple might be planning a big upgrade for the iPhone with the iOS 18 update. A new report has revealed that the Cupertino-based tech giant is working on introducing artificial intelligence (AI) features for its smartphones at the Worldwide Developers Conference (WWDC) 2024, expected to be held in June. Interestingly, the company might make all of the AI features available on-device instead of keeping them cloud-based. Notably, a report last week highlighted that Apple could unveil a new AI-powered browsing feature for Safari that will let users summarise web pages.

The information comes from Bloomberg’s Mark Gurman’s Power On newsletter, where he answered the question of how much of Apple’s planned AI functionality might be cloud-based. As per Gurman, not much at all: the tech giant could make all the features available locally and have them processed on-device. This move, if true, could have both upsides and downsides depending on how the iPhone maker handles the challenges of AI features.

Running AI features entirely locally is great for privacy and data security. It means any data shared with an app or system feature never leaves the user’s iPhone, and the information is unlikely to ever reach a third party. This makes the device more secure, and users do not have to worry about their sensitive data.

However, there are downsides, and in fact there are a couple of them. First, AI computations require significantly more processing power than the usual tasks performed by the device. Most large language models run GPU-based inference on servers. Even smartphone makers are now adding special “AI processors” that combine a powerful CPU, GPU, and NPU (neural processing unit). Despite this, running complex models locally on the device can be a tricky task. This is why Samsung gives users a choice of whether to run certain Galaxy AI features on the device or through its servers.

Bringing some of these features on-device could be a challenging task for Apple, and that leads to the second downside. If Apple remains intent on offering only on-device AI features, it may not be able to match some of the features competitors are offering. For example, Galaxy AI has an Interpreter feature that translates a verbal conversation between two speakers standing near the phone in real time. Similarly, Oppo offers AI-powered image generation capabilities to users in China. Will Apple be able to optimise such features on-device, and will it be able to do so fast enough that it does not fall behind the competition? WWDC 2024 might answer some of these questions.



Apple Researchers Working on On-Device AI Model That Can Understand Contextual Prompts

Apple researchers have published a new paper on an artificial intelligence (AI) model that they claim is capable of understanding contextual language. The yet-to-be peer-reviewed research paper also mentions that the large language model (LLM) can operate entirely on-device without consuming a lot of computational power. The description of the AI model makes it seem suited to the role of a smartphone assistant, and it could be used to upgrade Siri, the tech giant’s native voice assistant. Last month, Apple published another paper about a multimodal AI model dubbed MM1.

The research paper is currently in the pre-print stage and is published on arXiv, an open-access online repository of scholarly papers. The AI model is named ReALM, short for Reference Resolution As Language Modeling. The paper highlights that the primary focus of the model is to perform and complete tasks that are prompted using contextual language, which is closer to how humans naturally speak. For instance, as per the paper’s claim, it will be able to understand a request such as, “Take me to the one that’s second from the bottom”.

ReALM is made for performing tasks on a smart device. The references it resolves are divided into three segments: on-screen entities, conversational entities, and background entities. Based on the examples shared in the paper, on-screen entities are items currently visible on the device’s screen, conversational entities come from what the user has said or requested, and background entities are processes occurring in the background, such as a song playing in an app.
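
To make the idea concrete, here is an illustrative sketch of reference resolution framed as a language-modelling task. This is not the paper’s exact encoding: the helper function, entity list, and prompt wording below are hypothetical, and only the general idea of serialising on-screen, conversational, and background entities into text for a small LLM to choose from reflects the approach described.

```python
# Illustrative sketch (not the paper's exact encoding): reference resolution
# framed as language modelling. Candidate entities from the screen, the
# conversation, and background processes are serialised into a numbered list,
# and a language model is asked which one a contextual request refers to.

def build_reference_prompt(user_request, entities):
    """entities: list of (source, description) tuples, e.g. ("on-screen", "Pharmacy: 555-0132")."""
    lines = [f"{i + 1}. [{source}] {description}" for i, (source, description) in enumerate(entities)]
    return (
        "Candidate entities:\n"
        + "\n".join(lines)
        + f"\n\nUser request: {user_request}\n"
        + "Answer with the number of the entity the request refers to."
    )

entities = [
    ("on-screen", "Cafe Mocha, 12 Main St, open until 9 pm"),
    ("on-screen", "Pharmacy, 48 Elm St, phone 555-0132"),
    ("background", "Song playing: 'Example Track'"),
    ("conversational", "Contact mentioned earlier: Alex"),
]
print(build_reference_prompt("Call the one that's second from the bottom", entities))
# The resulting prompt would be fed to a compact on-device LLM for a one-token answer.
```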

What is interesting about this AI model is that the paper claims that despite taking on the complex task of understanding, processing, and acting on contextual prompts, it does not require large amounts of computational power, “making ReaLM an ideal choice for a practical reference resolution system that can exist on-device without compromising on performance.” It achieves this by using significantly fewer parameters than major LLMs such as GPT-3.5 and GPT-4.

The paper also goes on to claim that despite working in such a restricted environment, the AI model demonstrated “substantially” better performance than OpenAI’s GPT-3.5 and GPT-4. The paper further elaborates that while the model scored better on text-only benchmarks than GPT-3.5, it outperformed GPT-4 for domain-specific user utterances.

While the paper is promising, it is not peer-reviewed yet, and as such its validity remains uncertain. But if the paper gets positive reviews, that might push Apple to develop the model commercially and even use it to make Siri smarter.



Apple Reportedly Acquires Startup DarwinAI, Could Fuel Tim Cook’s AI Vision

Apple has reportedly acquired DarwinAI, a Canada-based startup focused on artificial intelligence (AI). The deal was struck earlier this year, and it is believed that it could boost the tech giant’s AI ambitions. In February, CEO Tim Cook revealed during the company’s quarterly earnings call that it was spending a “tremendous amount of time and effort” on AI. Without disclosing any details, he also hinted that some of these developments could be revealed later this year. Notably, reports have suggested that the iOS 18 update could add new AI capabilities to the iPhone.

According to a report by Bloomberg’s Mark Gurman, the Cupertino-based tech behemoth not only bought DarwinAI but also hired many of its employees. Gurman also added, citing unnamed sources, that the new hires have joined Apple’s AI division. It is believed that the acquisition occurred recently as the report hints that the deal could be announced officially later.

Alexander Wong, the co-founder and Chief Scientist of the startup, has reportedly also joined Apple as a director in its machine learning research group, part of the company’s AI division. Responding to Bloomberg’s query on the matter, the iPhone maker said it “buys smaller technology companies from time to time”. However, it did not reveal the purpose of or plans for this particular acquisition.

DarwinAI, as per its LinkedIn page, provided “manufacturers an end-to-end solution to improve product quality and increase production efficiency.” It is said to have developed an AI system that can visually inspect components during the manufacturing process. Wong has also reportedly invented a technique to make neural network models smaller and faster, and Apple could utilise the same technology to bring on-device AI models and features to its devices.

While Cook hinted that AI is a major focus for the company in 2024, he did not reveal what exactly the company is planning. Some rumours have suggested that Apple could be working on its own foundation model, dubbed AppleGPT. Other rumoured features include an enhanced version of Siri that can function as a chatbot similar to ChatGPT and Google Gemini, AI-generated playlists for Apple Music, AI integration into Apple’s productivity apps such as Pages and Keynote, and more. It is also reported that the iOS 18 update, which is expected to arrive later this year alongside the iPhone 16 series, could bring several new AI features to Apple smartphones.


