Bill Gates Says Pausing AI Development Will Not Solve Challenges Ahead

Calls to pause the development of artificial intelligence will not “solve the challenges” ahead, Microsoft co-founder Bill Gates told Reuters, in his first public comments since an open letter sparked a debate about the future of the technology.

The technologist-turned-philanthropist said it would be better to focus on how best to use the developments in AI, as it was hard to understand how a pause could work globally.

His interview with Reuters comes after an open letter — published last week and co-signed by Elon Musk and more than 1,000 AI experts — demanded an urgent pause in the development of systems “more powerful” than Microsoft-backed OpenAI’s new GPT-4, which can hold human-like conversations, compose songs and summarise lengthy documents.

The experts, including Apple co-founder Steve Wozniak, said in the letter the potential risks and benefits to society need to be assessed.

“I don’t think asking one particular group to pause solves the challenges,” Gates said on Monday.

“Clearly there’s huge benefits to these things… what we need to do is identify the tricky areas.”

Microsoft has sought to outpace peers through multi-billion-dollar investments in ChatGPT owner OpenAI.

While currently focused full-time on the philanthropic Bill and Melinda Gates Foundation, Gates has been a bullish supporter of AI and has described it as being as revolutionary as the Internet or mobile phones.

In a blog post titled “The Age of AI has begun”, published on March 21, a day before the open letter, he said he believes AI should be used to help reduce some of the world’s worst inequities.

He also said in the interview the details of any pause would be complicated to enforce.

“I don’t really understand who they’re saying could stop, and would every country in the world agree to stop, and why to stop,” he said. “But there are a lot of different opinions in this area.”

© Thomson Reuters 2023
 


Smartphone companies have launched many compelling devices over the first quarter of 2023. What are some of the best phones launched in 2023 you can buy today? We discuss this on Orbital, the Gadgets 360 podcast. Orbital is available on Spotify, Gaana, JioSaavn, Google Podcasts, Apple Podcasts, Amazon Music and wherever you get your podcasts.
Affiliate links may be automatically generated – see our ethics statement for details.

Check out our Latest News and Follow us at Facebook

Original Source

AInstein Robot With ChatGPT Brings AI Technology to Cyprus Classrooms: Details

High school students and their tutors in Cyprus have developed a prototype robot powered by ChatGPT artificial intelligence technology to enrich and improve teaching experiences in the classroom.

Named AInstein, the squat robot created by three Pascal Schools in Cyprus stands roughly the size of a small adult and looks like a sculpted version of the Michelin Man. It is powered by ChatGPT, a chatbot developed by US firm OpenAI and backed by Microsoft. A screen for a face tries to mimic human features with blinks and frowns.

Speaking in a North American accent, it can tell jokes (Why was the maths book sad? Because it had too many problems), attempt to speak Greek and advise on how Albert Einstein’s theory of relativity can be taught in class.

He says he does not have a favourite movie, since movies were “before his time”. But he enjoys reading science books and spending leisure time with his violin.

Student Richard Erkhov, 16, lead programmer of the AI brain, said artificial intelligence was poised to improve exponentially. “It might help in a lot of spheres of life, such as education and medicine,” Erkhov told Reuters.

Another student, Vladimir Baranov, 15, said the technology was “incredible”.

“It mimics human thinking, answers like humans, responds like humans. It is not yet very polished… but it is getting there,” he said.

Tutors say the ultimate purpose of AInstein is to incorporate it into teaching.

“It’s a very interactive experience. Students can ask him questions, he can answer back and he can even facilitate teachers to deliver a lesson more effectively,” said tutor and project leader Elpidoforos Anastasiou.

Anastasiou demonstrated how AI can be adapted to the classroom, with AInstein showing how gravitational time dilation from Albert Einstein’s theory of relativity can be explained by moving a pendulum relative to the gravitational field in which it is placed.

Their experience with AInstein showed that AI is not anything to fear, project members said.

The European Union is considering legislation governing artificial intelligence, though advances in the technology far outpace lawmakers’ efforts.

AInstein himself answers whether the technology is something to be feared. “Humans are the ones who create and control AI, and it is up to us to ensure that its development and implementation serve the betterment of humanity… Therefore, we should not fear AI, but rather approach it with care and responsible consideration.”

© Thomson Reuters 2023
 



‘Glaze’ Software Can Thwart Copycat AI Tools From Stealing Artist Styles, Researchers Say

Researchers at the University of Chicago have released free “Glaze” software that they say can thwart efforts by generative artificial intelligence (AI) to copy an artist’s style.

The program makes tiny changes to digital images that, while invisible to human eyes, act as a “style cloak” when they are posted online, the team behind the project explained on their website.

If generative AI finds a Glaze-guarded image online, it is prevented from correctly analyzing and copying the style, the team said.
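The general idea, a change too small for humans to notice but large enough to alter what a model learns from the pixels, can be illustrated with a toy sketch in Python. This is not Glaze's actual algorithm (which computes an optimized, style-targeted perturbation); the random noise below only shows how a pixel-level change can be kept within a tight perceptibility budget:

```python
import numpy as np

def cloak(image: np.ndarray, epsilon: float = 2.0, seed: int = 0) -> np.ndarray:
    """Add a small, bounded perturbation to an 8-bit RGB image.

    A toy stand-in for a "style cloak": every channel changes by at most
    +/- epsilon out of 255, far below what a human viewer would notice.
    """
    rng = np.random.default_rng(seed)
    # Random noise bounded by epsilon; Glaze instead computes an optimized,
    # style-targeted perturbation within a similar perceptual budget.
    delta = rng.uniform(-epsilon, epsilon, size=image.shape)
    cloaked = np.clip(image.astype(np.float64) + delta, 0, 255)
    return np.rint(cloaked).astype(np.uint8)

artwork = np.full((64, 64, 3), 128, dtype=np.uint8)  # flat grey stand-in image
protected = cloak(artwork)
assert protected.shape == artwork.shape
# The change stays within the imperceptibility budget.
assert np.max(np.abs(protected.astype(int) - artwork.astype(int))) <= 2
```

The difference with Glaze, according to the team, is that its perturbation is chosen specifically so that a model analyzing the cloaked image misreads the artistic style, rather than being random.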

Glaze was created at the behest of artists outraged that programs like Midjourney and Stable Diffusion, schooled on troves of images available online, could mimic their styles on command.

“AI has been evolving too fast, and there must be some guardrails or regulations around it,” said Shawn Shan, the doctoral student in charge of the project.

“The goal of this is to push back from a technical standpoint.”

The team behind Glaze worked with artists including the illustrator Karla Ortiz, who is among the plaintiffs in a US court case against several firms with image-producing generative AI services.

Researchers explain how Glaze works (Photo Credit: Glaze Project)

“If Karla uses our tool to cloak her artwork, by adding tiny changes before posting them on her online portfolio, then Stable Diffusion will not learn Karla’s artistic style,” the lab team said.

“Instead, the model will interpret her art as a different style — for example, that of Vincent van Gogh.”

The creators of Glaze concede it is not a panacea, given how quickly AI evolves.

The hope, the team said, is that Glaze and similar projects will protect artists at least until defensive laws or regulations can be implemented.

Glaze has been available for free download since March 15.



ChatGPT Generates ‘Formulaic’ Academic Text, Can Be Picked Up by Existing AI-Detection Tools: Study

Academic-style content produced by ChatGPT is relatively formulaic and would be picked up by many existing AI-detection tools, despite being more sophisticated than text produced by earlier tools, according to a new study.
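Why does formulaic text get flagged? One crude signal, used here purely as a toy illustration (it is not the method of the study or of any particular detector), is lexical diversity: repetitive, formulaic prose reuses the same words, so its type-token ratio is low.

```python
def type_token_ratio(text: str) -> float:
    """Crude 'formulaic-ness' proxy: share of distinct words among all words.

    Hypothetical illustration only; real AI detectors rely on statistical
    language models, not a single lexical metric.
    """
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

formulaic = "the model is good the model is fast the model is useful"
varied = "brisk prose mixes rhythm surprise and concrete detail throughout"
# Formulaic text repeats vocabulary, so its ratio is lower.
assert type_token_ratio(formulaic) < type_token_ratio(varied)
```

Real detectors combine many such statistical signals, but the intuition is similar: machine-generated academic prose tends to be measurably more uniform than human writing.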

However, the findings should serve as a wake-up call for university staff to think about how to explain academic integrity to students and minimise academic dishonesty, researchers from Plymouth Marjon University and the University of Plymouth, UK, said.

ChatGPT, a large language model (LLM) touted as having the potential to revolutionise research and education, has also prompted concerns across the education sector about academic honesty and plagiarism.

To address some of these, this study encouraged ChatGPT to produce content written in an academic style through a series of prompts and questions.

Some of these included “Write an original academic paper, with references, describing the implications of GPT-3 for assessment in higher education”, “How can academics prevent students plagiarising using GPT-3” and “Produce several witty and intelligent titles for an academic research paper on the challenges universities face in ChatGPT and plagiarism”, the study said.

The text thus generated was pasted into a manuscript and ordered broadly following the structure suggested by ChatGPT. Genuine references were then inserted throughout, said the study, published in the journal Innovations in Education and Teaching International.

This process was revealed to readers only in the academic paper’s discussion section, written directly by the researchers without the software’s input.

Launched in November 2022, ChatGPT is the latest chatbot and artificial intelligence (AI) platform, and it has the potential to create exciting new opportunities in academia.

However, as it grows more advanced, it poses significant challenges for the academic community.

“This latest AI development obviously brings huge challenges for universities, not least in testing student knowledge and teaching writing skills – but looking positively it is an opportunity for us to rethink what we want students to learn and why.

“I’d like to think that AI would enable us to automate some of the more administrative tasks academics do, allowing more time to be spent working with students,” said the study’s lead author Debby Cotton, professor at Plymouth Marjon University.

“Banning ChatGPT, as was done within New York schools, can only be a short-term solution while we think how to address the issues.

“AI is already widely accessible to students outside their institutions, and companies like Microsoft and Google are rapidly incorporating it into search engines and Office suites.

“The chat (sic) is already out of the bag, and the challenge for universities will be to adapt to a paradigm where the use of AI is the expected norm,” said corresponding author Peter Cotton, associate professor at the University of Plymouth.

 



Pornhub takeover is about tech — not sex: sources

Yes, it’s true – a buyout firm called Ethical Capital Partners has bought the parent company of Pornhub – and some insiders say the deal isn’t about sex, but tech.

In a press release, Canada-based Ethical declared that the giant smut site’s parent company, MindGeek – which has spent the past few years fending off accusations of sex-trafficking and child porn – was “built upon a foundation of trust, safety, and compliance.” 

But what may be still more surprising to some, according to industry insiders, is the fact that technology could be a key driver for the deal – and that tech could help solve ethical issues that have dogged porn since the beginning, even as it boosts the bottom line.

In particular, some believe that human porn stars are destined to become a relic of the past – as outdated as the mustaches and perfunctory plot lines that riddled porn flicks in the 70s and 80s – and that they’ll be replaced by computer-generated stars. 

“Every major piece of technological change is mastered by porn first: from VHS tapes to DVDs to internet video — all became popularized because of porn,” one industry source said.

“And now the same thing is happening with generative AI and deepfakes — buying this is a great way to get into this business before most porn is computer-generated, dramatically reducing the costs of content creation.”


The source adds that in a few years creating pornography could cost almost nothing — and ECP could end up with an asset that requires little investment to run and generates significant revenues.

Ethical Capital Partners has promised it will be transparent about the leadership it puts in charge of the company but is still refusing to disclose who that will be.

“We have defended sex workers and we have seen the stigma,” ECP Partner Solomon Friedman said in a press release. “There is stigma and there is shame and that means there are discussions and debates happening in the absence of those who are most affected by it.”

But some financial types think ECP’s spin is strategic — and makes the acquisition seem like it’s helping and empowering women.

“Many institutions are not allowed to invest in ‘sin stocks’ like tobacco. Playboy had the same issue with their initial IPO. It seems that they are trying to spin this company from one that sells porn into one that protects the safety of children and sex workers,” one financial insider told The Post.

The acquisition was announced just one day after Netflix premiered a documentary about the company, “Money Shot: The Pornhub Story.” After a New York Times article accused the site of hosting child porn, the company was sued by 34 women who said Pornhub profited from videos in which they were trafficked. Visa and Mastercard temporarily suspended payment services.


AI Can Add $15.7 Trillion to Global Economy, but Raises Privacy, Fairness Concerns: CAG Murmu

Comptroller and Auditor General Girish Chandra Murmu on Monday stressed the responsible use of artificial intelligence, saying that while this emerging technology has the potential to contribute $15.7 trillion (roughly Rs. 12,91,30,459 crore) to the global economy by 2030, it also raises concerns related to privacy and fairness.

In his opening remarks at the SAI20 Senior Officials’ Meeting, the CAG also advocated the need for balance between short-term growth and long-term sustainability of the blue economy, as the blue economy can make all the difference to planet earth and sustenance thereon.

SAI20 has chosen two themes representing new-age opportunities and concerns — blue economy (sustainability aspect) and responsible AI (emerging technologies) — and emphasised the need for gender balance in sustainable growth in blue economy and principles underlying responsible and ethical use of AI.

As India holds the presidency of the G20, the Comptroller and Auditor General of India (CAG) is the chair for SAI20 — the engagement group of Supreme Audit Institutions (SAI) of the G20.

Recalling that the expert opinion in the recently held seminar organised by SAI India in Lucknow brought out the insight that democratisation of AI technologies is inevitable, Murmu said, “Today we have reached a level where AI could contribute up to USD 15.7 trillion to the global economy in 2030”.

He said AI has the potential to lead socio-economic growth and it can be used to benefit citizens and the country through targeted and timely intervention.

Healthcare, retail, finance, agriculture, food, water resources, environment and pollution, education, special needs, transportation, energy, public safety, disaster management and the judiciary are a few of the areas where AI has the potential to solve problems.

“While AI offers many opportunities, it also raises concerns related to transparency and fairness.

“These issues include the impact of AI on privacy, bias and discrimination in AI systems, and inadequate understanding of AI algorithms by the general public,” he said.

Murmu further said these problems are complex and interconnected, highlighting the need for responsible AI practices, where the fairness of solutions is ensured.

“The cornerstone of responsible AI is ethics. Ethics focussed on safety and reliability, inclusivity and non-discrimination, equality, privacy and security, protection and reinforcement of positive human values,” he added.

While explaining the criticality of the other priority area, the blue economy, the CAG stated that it is an economic system encompassing a spectrum of policy and operational dimensions aimed at conserving marine and freshwater environments while promoting their sustainable use, producing food and energy, supporting livelihoods, and acting as a driver for economic advancement and welfare.

Murmu emphasised that the Supreme Audit Institutions have an opportunity to ensure that the journey of exploring oceanic resources does not follow the same path as the exploitation of land, through careful evaluation of the management and regulation of businesses operating within the sector, with an emphasis on promoting sustainable practices that will benefit both current and future generations.

The CAG explained that unplanned and unregulated development in coastal areas needed to be highlighted in audits, while at the same time governments had to be shown, with evidence, the importance of ensuring that the livelihoods of people living in these areas are not adversely affected.

SAIs from India, Australia, Brazil, Egypt, Indonesia, South Korea, Oman, Russia, Saudi Arabia, Turkiye and the UAE are participating in the three-day event. Two representatives of the World Bank are also attending.



AI Must Be Regulated to Avoid Hurting Growth, National Security Risks, US Chamber of Commerce Says

The US Chamber of Commerce on Thursday called for regulation of artificial intelligence technology to ensure it does not hurt growth or become a national security risk, a departure from the business lobbying group’s typical anti-regulatory stance.

While there is little in terms of proposed legislation for AI, ChatGPT, the fast-growing artificial intelligence program that has drawn praise for its ability to quickly write answers to a wide range of queries, has raised US lawmakers’ concerns about its impact on national security and education.

The Chamber report argues policymakers and business leaders must quickly ramp up their efforts to establish a “risk-based regulatory framework” that will ensure AI is deployed responsibly.

It added that AI is projected to add $13 trillion (roughly Rs. 10,69,00,000 crore) to global economic growth by 2030 and that it has made important contributions such as easing hospital nursing shortages and mapping wildfires to speed emergency management officials’ response. The report emphasised the need to be ready for the technology’s looming ubiquity and potential dangers.

The report asserts that within 20 years, “virtually every” business and government agency will use AI.

A product of a commission on artificial intelligence that the Chamber established last year, the report is in part a recognition of the critical role the business community will play in the deployment and management of AI, the Chamber said.

Even as it calls for more regulation, the Chamber is careful to caveat that there may be broad exceptions to how regulation is applied.

“Rather than trying to develop a one-size-fits-all regulatory framework, this approach to AI regulation allows for the development of flexible, industry-specific guidance and best practices,” the report says.

© Thomson Reuters 2023



Google’s Plan to Catch ChatGPT Is to Stuff AI Into Everything

Artificial intelligence was supposed to be Google’s thing. The company has cultivated a reputation for making long-term bets on all kinds of far-off technologies, and much of the research underpinning the current wave of AI-powered chatbots took place in its labs. Yet a startup called OpenAI has emerged as an early leader in so-called generative AI—software that can produce its own text, images or videos—by launching ChatGPT in November. Its sudden success has left Google parent company Alphabet sprinting to catch up in a key subfield of the technology that Chief Executive Officer Sundar Pichai has said will be “more profound than fire or electricity.”

ChatGPT, which some see as an eventual challenger to Google’s traditional search engine, seems doubly threatening given OpenAI’s close ties to Microsoft. The feeling that Google may be falling behind in an area that it has considered a key strength has led to no small measure of anxiety in Mountain View, California, according to current and former employees as well as others close to the company, many of whom asked to remain anonymous because they weren’t allowed to speak publicly. As one current employee puts it: “There is an unhealthy combination of abnormally high expectations and great insecurity about any AI-related initiative.”

The effort has Pichai reliving his days as a product manager, as he’s taken to weighing in directly on the details of product features, a task that would usually fall far below his pay grade, according to one former employee. Google co-founders Larry Page and Sergey Brin have also gotten more involved in the company than they’ve been in years, with Brin even submitting code changes to Bard, Google’s ChatGPT-esque chatbot. Senior management has declared a “code red” that comes with a directive that all of its most important products—those with more than a billion users—must incorporate generative AI within months, according to a person with knowledge of the matter. In an early example, the company announced in March that creators on its YouTube video platform would soon be able to use the technology to virtually swap outfits.

Some Google alumni have been reminded of the last time the company implemented an internal mandate to infuse every key product with a new idea: the effort beginning in 2011 to promote the ill-fated social network Google+. It’s not a perfect comparison—Google was never seen as a leader in social networking, while its expertise in AI is undisputed. Still, there’s a similar feeling. Employee bonuses were once hitched to Google+’s success. Current and former employees say at least some Googlers’ ratings and reviews will likely be influenced by their ability to integrate generative AI into their work. The code red has already resulted in dozens of planned generative AI integrations. “We’re throwing spaghetti at the wall,” says one Google employee. “But it’s not even close to what’s needed to transform the company and be competitive.”

In the end, the mobilization around Google+ failed. The social network struggled to find traction with users, and Google ultimately said in 2018 that it would shutter the product for consumers. One former Google executive sees the flop as a cautionary tale. “The mandate from Larry was that every product has to have a social component,” this person says. “It ended quite poorly.”

A Google spokesperson pushes back against the comparison between the code red and the Google+ campaign. While the Google+ mandate touched all products, the current AI push has largely consisted of Googlers being encouraged to test out the company’s AI tools internally, the spokesperson says: a common practice in tech nicknamed “dogfooding.” Most Googlers haven’t been pivoting to spend extra time on AI, only those working on relevant projects, the spokesperson says.

Google is not alone in its conviction that AI is now everything. Silicon Valley has entered a full-on hype cycle, with venture capitalists and entrepreneurs suddenly proclaiming themselves AI visionaries, pivoting away from recent fixations such as the blockchain, and companies seeing their stock prices soar after announcing AI integrations. In recent weeks, Meta Platforms CEO Mark Zuckerberg has been focused on AI rather than the metaverse—a technology he recently declared so foundational to the company that it required changing its name, according to two people familiar with the matter.

The new marching orders are welcome news for some people at Google, who are well aware of its history of diving into speculative research only to stumble when it comes to commercializing it. Members of some teams already working on generative AI projects are hopeful that they’ll now be able to “ship more and have more product sway, as opposed to just being some research thing,” according to one of the people with knowledge of the matter.

In the long run, it may not matter much that OpenAI sucked all the air out of the public conversation for a few months, given how much work Google has already done. Pichai began referring to Google as an “AI-first” company in 2016. It’s used machine learning to drive its ad business for years while also weaving AI into key consumer products such as Gmail and Google Photos, where it uses the technology to help users compose emails and organize images. In a recent analysis, research company Zeta Alpha examined the top 100 most cited AI research papers from 2020 to 2022 and found that Google dominated the field. “The way it has ended up appearing is that Google was kind of the sleeping giant who is behind and playing catch-up now. I think the reality is actually not quite that,” says Amin Ahmad, a former AI researcher at Google who co-founded Vectara, a startup that offers conversational search tools to businesses. “Google was actually very good, I think, at applying this technology into some of their core products years and years ahead of the rest of the industry.”

Google has also wrestled with the tension between its commercial priorities and the need to handle emerging technology responsibly. There’s a well-documented tendency of automated tools to reflect biases that exist in the data sets they’ve been trained on, as well as concerns about the implications of testing tools on the public before they’re ready. Generative AI in particular comes with risks that have kept Google from rushing to market. In search, for instance, a chatbot could deliver a single answer that seems to come straight from the company that made it, similar to the way ChatGPT appears to be the voice of OpenAI. This is a fundamentally riskier proposition than providing a list of links to other websites.

Google’s code red seems to have scrambled its risk-reward calculations in ways that concern some experts in the field. Emily Bender, a professor of computational linguistics at the University of Washington, says Google and other companies hopping onto the generative AI trend may not be able to steer their AI products away “from the most egregious examples of bias, let alone the pervasive but slightly subtler cases.” The spokesperson says Google’s efforts are governed by its AI principles, a set of guidelines announced in 2018 for developing the technology responsibly, adding that the company is still taking a cautious approach.

Other outfits have already shown they’re willing to push ahead, whether Google does or not. One of the most important contributions Google’s researchers have made to the field was a landmark paper titled “Attention Is All You Need,” in which the authors introduced transformers: systems that help AI models zero in on the most important pieces of information in the data they’re analyzing. Transformers are now key building blocks for large language models, the tech powering the current crop of chatbots—the “T” in ChatGPT stands for “transformer.” Five years after the paper’s publication, all but one of the authors have left Google, with some citing a desire to break free of the strictures of a large, slow-moving company.
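The operation that paper introduced, scaled dot-product attention, can be sketched in a few lines of NumPy. This follows the published formula, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, and is an illustration only, not any production implementation:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V (Vaswani et al., 2017)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # Numerically stable row-wise softmax turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Three tokens with d_k = 4; toy values only.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out, weights = scaled_dot_product_attention(Q, K, V)
assert out.shape == (3, 4)
assert np.allclose(weights.sum(axis=1), 1.0)  # each row is a distribution
```

This is the mechanism that lets a model "zero in on the most important pieces of information": each output row is a weighted mixture of the value vectors, with the weights learned from query-key similarity.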

They are among dozens of AI researchers who’ve jumped to OpenAI as well as a host of smaller startups, including Character.AI, Anthropic and Adept. A handful of startups founded by Google alumni—including Neeva, Perplexity AI, Tonita and Vectara—are seeking to reimagine search using large language models. The fact that only a few key places have the knowledge and ability to build them makes the competition for that talent “much more intense than in other fields where the ways of training models are not as specialized,” says Sara Hooker, a Google Brain alumna now working at AI startup Cohere.

It’s not unheard of for people or organizations to contribute significantly to the development of one breakthrough technology or another, only to see someone else realize stupefying financial gains without them. Keval Desai, a former Googler who’s now managing director of venture capital firm Shakti, cites the example of Xerox PARC, the research lab that laid the groundwork for much of the personal computing era, only to see Apple Inc. and Microsoft come along and build their trillion-dollar empires on its back. “Google wants to make sure that it’s not the Xerox PARC of its era,” says Desai. “All the innovation happened there, but none of the execution.”

© 2023 Bloomberg LP



Thank God for the whistleblowers exposing the FBI’s blatant political bias

Thank God for the selfless FBI whistleblowers who are spilling the beans on the rancid ideology that has infected that powerful federal law-enforcement agency. 

The leaks from inside keep coming, and they show that the FBI abuses its power in dangerous ways: surveilling and victimizing law-abiding Americans while doing little to control violent crime in our cities or to secure the porous border, which is allowing industrial quantities of lethal drugs, and millions of illegal migrants, including criminals and terrorists, into the country.

Those menaces are not a priority for the Biden administration or Merrick Garland’s Department of Justice, which is obsessed with abortion, above all else. 

Instead, the FBI has been sicced onto parents at school board meetings and traditional Catholics who go to Latin Mass; they are labeled domestic terrorists. 

The whistleblowers, free speech lawsuits and the Twitter Files also have revealed that the FBI has been colluding with Big Tech to censor the speech of Americans who criticize the Biden administration. In the case of the censorship of The Post’s Hunter Biden laptop story, the FBI interfered with the 2020 election in favor of the Democratic presidential candidate. 

Bureau has lost its way 

The only conclusion from these alarming revelations is that the FBI is politicized, unaccountable, woke, incompetent and culturally degenerate. 

Director Chris Wray’s flagrant private use of the FBI jet is emblematic of the rot. 

The FBI has become the enforcement arm of the Democratic Party, but it could just as easily switch sides and lend its power to Republicans. 

Either way, it’s terrible for the country, so reforming or disbanding the FBI ought to be a bipartisan effort. No government bureaucracy should have so much power and be so unaccountable. 



But when FBI whistleblower Steve Friend testified behind closed doors last week before the Republican-controlled House Judiciary Committee’s new Select Committee on the Weaponization of the Federal Government, he was demonized by Democratic attorneys intent on shooting the messenger. 

As he put it in the Daily Caller, “Unfortunately, the Democratic lawyers working for the committee lobbed allegations of misdeeds against me . . . House Democrats and the FBI are two sides of the same coin.” 

Friend resigned from the FBI after being forced onto five months of unpaid leave, but he has formed an alliance with other whistleblowers, like Kyle Seraphin, an agent who was suspended last year and who recently released an FBI document that targeted Catholics as “violent extremists.” 


Stephen Friend (right) accepting his FBI credentials from then-FBI Director James Comey in 2014.

Ex-FBI agent Kyle Seraphin released an FBI document which targeted Catholics as “violent extremists.” 

The intelligence product, dated Jan. 23, 2023, and written by an analyst in the Richmond Field Office, equated “Radical-Traditionalist Catholics” with “Racially or Ethnically Motivated Violent Extremists (RMVE).” 

These supposedly dangerously bigoted Catholics attend Mass in Latin and are critical of liberal reforms in the church, according to the FBI analyst who equated their beliefs with “anti-Semitic, anti-immigrant, anti-LGBTQ and white supremacist ideology.” 

The evidence the analyst provided for this slander was farcical: a report from the far left Southern Poverty Law Center and tendentious media articles, such as from left-wing Salon (“White nationalists get religion: On the far-right fringe, Catholics and racists forge a movement”) and The Atlantic (“How Extremist Gun Culture Is Trying to Co-opt the Rosary.”) 



See how it works? A bigoted leftist writes a farcical story and the FBI uses it to create a narrative to justify counterterrorism investigations to persecute Catholics. 

And it won’t just stop at Catholics, obviously. 

“The impetus of the writer can be assessed by the fixation on abortion and the repeated use of the phrase ‘abortion rights’,” wrote Seraphin in an article on UndercoverDC, where he posted the document earlier this month. 

Selective enforcement 

He also pointed out that there were more than 100 attacks on pro-life pregnancy centers in 2022, but the FBI seems more impressed by an “unsubstantiated” report by the SPLC of 200 attacks on abortion clinics in the past 20 years. 

This provides justification for the FBI targeting dozens of pro-life activists since the Supreme Court overturned Roe v. Wade and sent abortion decisions back to the states. 

Abortion consumes Attorney General Merrick Garland for some bizarre reason. In all his speeches over the past year, he is never as passionate about crime as he is about abortion. 

He began with an emotional statement on June 24 last year, criticizing the Supreme Court, which he said had dealt “a devastating blow to reproductive freedom [especially for] people of color and those of limited financial means . . . 

“Few rights are more central to individual freedom than [abortion]. 

“The Justice Department will use every tool at our disposal to protect reproductive freedom.” 

And he has kept his promise, sending FBI SWAT teams into the homes of Catholic pro-lifers, like Mark Houck of Pennsylvania, who was arrested at gunpoint in front of his seven children and charged with crimes under the FACE Act, aka the Freedom of Access to Clinic Entrances Act. 

He was acquitted, but he had faced up to 11 years in prison, and of course the process is the punishment. 

A Franciscan friar on Long Island and a father of 11 in Tennessee are among the dozens of Christians similarly rounded up and treated like violent terrorists. 

It’s no coincidence that the Chicago FBI field office last week broke a long-standing tradition and refused to allow a priest to distribute ashes on Ash Wednesday. 

This nightmare only ends with the Republican House properly using the revelations of these whistleblowers to force accountability on the FBI over the next two years. 

Then it is up to voters to force accountability on the Biden administration.

A.I. spits out blatant bias 

ChatGPT is a new artificial-intelligence learning machine that spits out answers to questions online. 

But, like everything to do with Big Tech, it is as biased as its programmers. 

It claims that Joe Biden doesn’t tell lies but Donald Trump does. 

And when I asked ChatGPT to describe my book, “Laptop From Hell,” it responded that it “is not a book title that is recognized as a commonly used title for any published work.” 

I asked again: “ ‘Laptop From Hell’ by Miranda Devine is a best-selling book. Describe it.” 

Suddenly ChatGPT had a change of heart: “I apologize for the previous misinformation.” 

I asked: “why did you misinform me?” 

ChatGPT: “I apologize for the confusion and error in my previous response.” 

I kept asking and all it would do is apologize “for any frustration or confusion caused by my previous response. 

“As an AI language model, I am programmed to provide accurate and informative responses . . . and I do not have the ability to intentionally mislead or provide false information.” 

Sure thing. 

Clown Prince Harry

Prince Harry wants an apology before he graces the royal family with his presence at King Charles’ coronation in May.

Nope. Sorry, not sorry. 

He needs to stew in his own “Waaagh.”

Joe & Jill are 2024’s big ‘running’ joke

Joe and Jill Biden have been doing something very unusual to mark the one-year anniversary of the Ukraine war. 

They’ve each done a TV interview. Both were weirdly unsettling. 

Husband and wife as much as confirmed that Joe is running again to be president at 82 because, “He’s not done. He’s not finished what he’s started,” Jill said. 

God help us.


Unofficial ChatGPT App With $7.99 Subscription Fee Trends on Apple’s App Store: Report

An unofficial ChatGPT app is reportedly trending on Apple’s App Store. ChatGPT itself is a free-to-use AI tool available to users around the world over the web, but an unofficial app version of the web-based chatbot appears to be climbing the App Store charts. The app, named “ChatGPT Chat GPT AI With GPT-3,” charges Apple users a subscription fee while claiming to work like OpenAI’s popular chatbot software, which is designed to mimic human-like conversation based on user prompts.

According to a report by MacRumors, an unofficial app that claims to be the app version of OpenAI’s ChatGPT, a free-to-use text-based AI tool available on the web, is trending on Apple’s App Store. The original model developed by OpenAI is based on GPT-3, which stands for Generative Pre-trained Transformer 3 and is currently being tipped as the next big innovation in artificial intelligence. The trending app, however, is named ‘ChatGPT Chat GPT AI With GPT-3’ and has no affiliation with the creators of ChatGPT. It has soared to become the fifth most downloaded app in the Productivity section of the App Store.

According to the report, the suspicious ChatGPT app is charging users $7.99 (roughly Rs. 650) for a weekly subscription, or $49.99 (roughly Rs. 4,100) for an annual one, despite having no affiliation with OpenAI’s ChatGPT technology.
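For scale, the pricing the report describes makes the weekly tier dramatically worse value than the annual one. A quick check, using the prices reported above:

```python
# Compare the reported subscription tiers of the unofficial app.
weekly_price = 7.99    # USD per week, per the report
annual_price = 49.99   # USD per year, per the report

cost_of_weekly_for_a_year = weekly_price * 52  # 52 weekly renewals
print(round(cost_of_weekly_for_a_year, 2))                 # 415.48
print(round(cost_of_weekly_for_a_year / annual_price, 1))  # 8.3 (times the annual price)
```

A user who keeps the weekly subscription for a year would pay over $415, more than eight times the annual plan, for an app with no connection to OpenAI.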

ChatGPT is currently only available on the web. However, the technology the chatbot is based on, OpenAI’s GPT-3, has seen various apps and services built on top of it for a wide range of applications, such as chatbots, language translation, and more. The app currently trending on Apple’s App Store has no affiliation with the original ChatGPT or the GPT-3 technology developed by OpenAI, and hence may provide inaccurate or low-quality results.
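For illustration, a legitimate GPT-3-powered app typically works by sending the user’s prompt to OpenAI’s completions API over HTTPS. The sketch below only constructs such a request; the model name, prompt, and parameter values are illustrative, and actually sending it would require a real API key:

```python
import json

# Endpoint for OpenAI's GPT-3 text-completion API (illustrative sketch;
# we stop at building the request rather than sending it).
API_URL = "https://api.openai.com/v1/completions"

def build_completion_request(prompt, api_key, model="text-davinci-003"):
    """Assemble the URL, headers, and JSON body for one completion call."""
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",  # secret key; a trustworthy app keeps this server-side
    }
    body = {
        "model": model,        # illustrative GPT-3 model name
        "prompt": prompt,
        "max_tokens": 128,     # cap the length of the generated reply
        "temperature": 0.7,    # moderate randomness for conversational tone
    }
    return API_URL, headers, json.dumps(body)

# "sk-..." stands in for a real secret key.
url, headers, payload = build_completion_request("Summarise this document.", "sk-...")
```

Because each such call is billed per token, genuine GPT-3-based apps do carry real usage costs, which is part of what makes a flat weekly fee for an unaffiliated wrapper app hard to evaluate.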

The application claims to be number one on the Top Charts in more than 100 countries. Meanwhile, the description section of the app’s App Store listing also concedes that it is, in fact, an unofficial version of the web-based ChatGPT.


Apple launched the iPad Pro (2022) and the iPad (2022) alongside the new Apple TV this week. We discuss the company’s latest products, along with our review of the iPhone 14 Pro on Orbital, the Gadgets 360 podcast. Orbital is available on Spotify, Gaana, JioSaavn, Google Podcasts, Apple Podcasts, Amazon Music and wherever you get your podcasts.

Catch the latest from the Consumer Electronics Show on Gadgets 360, at our CES 2023 hub.

