Google’s ‘Genesis’ AI is capable of writing news articles

At a time when major strikes are happening in Hollywood over the use of AI in film writing, among various other issues, Google has introduced an AI that could alter journalism. Google’s Bard is already a successful generative AI tool, but the Alphabet-owned company has now introduced a new tool that can write news articles. The AI tool, titled ‘Genesis’, is being pitched to news organizations such as The New York Times to generate news articles.

Journalism about to change?

Why have hardworking journalists investigate matters that affect the general public and bring the truth forward when you can type in a few easy prompts and get a news article in no time? The New York Times’ exclusive story reveals that the Genesis AI tool can create written content from data provided to it. The AI is capable of handling content across different genres and can churn out articles on almost any topic, be it current events or other kinds of information.

The New York Times is one of the few organizations that have seen demonstrations of this tool. Google believes the tool can act as a personal assistant for journalists by automating a few tasks, in turn making it easier for them to focus on other responsibilities. A Google spokesperson was quoted as saying that the tool is not intended to, and cannot, replace the essential role of journalists.

Google views Genesis as a responsible tool that can help the publishing industry avoid the challenges posed by generative AI. However, I don’t really believe that issues caused by one AI can be solved by another.

Several journalists who witnessed Google’s presentation of the AI deemed it unsettling. Reports suggest that the presentation glosses over the hard work and dedication that goes into creating well-crafted news articles. For now, Google’s AI tool remains in the testing stage, and in my opinion it should be shelved entirely, as the fourth pillar of democracy, i.e. journalism, should not rest on the wobbly shoulders of artificial intelligence.

OpenAI CEO Sam Altman warns developers to put safety limits on ChatGPT-like apps

The AI race is now in full swing. Many, if not all, tech players around the world have stepped into the arena to battle it out in the AI wars and emerge as number one. Currently, OpenAI is leading the charts with its monumental ChatGPT, which made waves across the world after launching in November 2022. However, while humanity is amazed at what conversational AI is capable of doing, the flip side includes major concerns. Is it safe? Will it replace my job? Will AI become sentient? What does the future hold? These are some of the questions circulating on the internet around ChatGPT and similar services that have either been released or are waiting to be released in the near future. Recently, OpenAI’s CEO Sam Altman sat down for an interview with ABC News where he shared his concerns about the AI race –

Sam Altman’s views on the AI race

During the interview, one major concern Sam Altman raised was that other developers will build services similar to ChatGPT without putting any safety limits on them. As of now, if ChatGPT is asked to write essays on controversial topics, it refuses to do so. Altman shared that not having safety protocols in place for a ChatGPT clone would be dire, as society as a whole does not have enough time to figure out how to react to it. He further added that he is particularly worried about these large language models being used to spread misinformation.

The OpenAI CEO was transparent about the issues concerning AI and also shared that AI will eventually cut a lot of current jobs. Altman was quoted as saying, “We’ve got to be careful here. I think people should be happy that we are a little bit scared of this.” When asked why he is scared of AI, he said that if he did not express such fears, people should either not trust him or be very unhappy that he is in this position. We’re only at the beginning of the AI revolution, so painting a pessimistic picture of it is not fair. However, it will be interesting to see whether this new technology turns out to be a boon or a curse for humanity!

Google announces AI features in Gmail, Docs, and more

It looks like AI has finally reached a point in our lives where we cannot escape it. After OpenAI kicked off what can only be called the AI revolution, many tech giants have made it their primary goal to get on the AI bandwagon and excel. Google has now announced a suite of upcoming AI features for various Workspace apps, including Google Docs, Gmail, Sheets, and Slides. Check out every AI implementation in Google apps below –

Google goes AI

The new features introduce a revamped way to generate, summarize, and brainstorm text with AI in Google Docs, somewhat, if not entirely, similar to how people use OpenAI’s ChatGPT. Users can also generate full emails in Gmail from a few bullet points. AI-generated imagery and the ability to generate audio and visual presentations in Slides will also be part of the new Google AI services. Google’s quick response to the changing AI landscape hints at the company’s ambition to catch up with the competitors currently thriving in the sector.

Reports suggest that Google declared a ‘Code Red’ back in December, when the company’s senior management asked staff to add AI tools to almost all end-user products. While Google is making waves in the world of AI, the company’s approach to announcing the new services can only be described as hasty. Although Google has already announced a plethora of new features, as of now only the AI writing tools in Docs and Gmail will be making their way to a group of ‘Trusted Testers’ in the US. Here are all the AI features coming to Google Workspace in the near future, as per Google’s recent blog post:

  • draft, reply, summarize, and prioritize your Gmail
  • brainstorm, proofread, write, and rewrite in Docs
  • bring your creative vision to life with auto-generated images, audio, and video in Slides
  • go from raw data to insights and analysis via auto-completion, formula generation, and contextual categorization in Sheets
  • generate new backgrounds and capture notes in Meet
  • enable workflows for getting things done in Chat

Google unveils ChatGPT rival named ‘Bard’

ChatGPT quickly became a topic of divided discussion all over the internet after its launch in November 2022. While some considered the chatbot AI to be a step in the right direction, others feared the concept altogether. Whether we like it or not, chatbot AI is here to stay, as Google has now announced its very own ChatGPT rival, titled Bard. Google’s chatbot AI is based on its controversial LaMDA (Language Model for Dialogue Applications). The announcement comes only a few days after Google CEO Sundar Pichai revealed its development in an earnings call. It is reported that Google management declared a ‘Code Red’ when ChatGPT received a positive response from users around the world. Here is everything you need to know about Google’s chatbot AI Bard –

What is it?

Similar to ChatGPT, Google Bard is an AI-powered chatbot that can answer your queries in a conversational way. Google has revealed that Bard draws information from the internet in order to provide high-quality, fresh responses. At its core it is powered by LaMDA, Google’s language model built on Transformer, a neural network architecture that Google Research invented and open-sourced back in 2017. Readers may be surprised to know that ChatGPT relies on the GPT-3 language model, which is also built on Transformer.
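To make that shared foundation a little more concrete, here is a minimal sketch of scaled dot-product self-attention, the core operation of the Transformer architecture that both LaMDA and GPT-3 build on. This is an illustrative toy in Python, not Google’s or OpenAI’s actual code; the dimensions, weight matrices, and function name are placeholders chosen for the example.

import numpy as np

def self_attention(x, w_q, w_k, w_v):
    # x: (seq_len, d_model) token embeddings; w_q/w_k/w_v: (d_model, d_k) projection matrices
    q = x @ w_q                                       # queries
    k = x @ w_k                                       # keys
    v = x @ w_v                                       # values
    scores = q @ k.T / np.sqrt(k.shape[-1])           # how strongly each token attends to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the sequence
    return weights @ v                                # each output is a weighted mix of all value vectors

# Toy usage: 4 tokens with 8-dimensional embeddings and random (untrained) weights
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w = [rng.normal(size=(8, 8)) for _ in range(3)]
out = self_attention(x, *w)
print(out.shape)  # (4, 8)

Production models such as LaMDA and GPT-3 stack many such attention layers with multiple heads, feed-forward blocks, and billions of learned parameters, then train them on enormous text corpora, which is where the conversational ability comes from.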

Google Bard Access

As of now, Google Bard is not available to the general public for testing; only a select few users have access to Google’s chatbot AI. In order to gather broader feedback on the chatbot AI, Google is releasing a lightweight model version of LaMDA that requires significantly less computing power. The post shared by Sundar Pichai says that Google will combine external feedback with its own internal testing to ensure Bard’s responses meet a high bar for quality, safety, and groundedness in real-world information.

This will be the second time Google is trying its luck with LaMDA: back in July 2022, Blake Lemoine, a former software engineer at Google, branded the AI sentient and claimed that the language model offered sexist and racist responses. While Google has been working on its own language model for a while, it halted a public rollout due to the allegations from its former employee. It will be interesting to see how this supposedly sentient chatbot AI turns out.
