

Language Model Users Beware: 4 Pitfalls to Keep in Mind


Nowadays, language models like ChatGPT are employed in a wide variety of tasks, ranging from fact-checking and email services to medical reporting and legal services.


While they are transforming how we interact with technology, it is important to remember that the information they provide can be fabricated, contradictory, or outdated. Because language models have this tendency to produce false information, we need to be aware of the problems that can arise when using them.

What Is a Language Model?

A language model is an AI program that can understand and create human language. The model is trained on text data to learn how words and phrases fit together to form meaningful sentences and convey information effectively.


Training usually works by having the model predict the next word in a sequence. After training, the model uses this learned ability to generate text from a few initial words, called a prompt. For instance, if you provide ChatGPT with an incomplete sentence like “Techopedia is _____,” it will generate the following prediction: “Techopedia is an online technology resource that offers a wide range of articles, tutorials, and insights on various technology-related topics.”
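To make the prediction step concrete, the sketch below shows prompt completion in code. It is a minimal illustration that assumes the open-source Hugging Face transformers library and the small GPT-2 model as a stand-in; it demonstrates next-word prediction in general, not how ChatGPT itself is built or served.

```python
# A minimal sketch of next-word prediction, using the openly available GPT-2
# model from the Hugging Face "transformers" library as a stand-in for larger
# models like ChatGPT.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Techopedia is"
# The model repeatedly predicts a likely next token until the length limit,
# turning the short prompt into a longer continuation.
result = generator(prompt, max_new_tokens=20, num_return_sequences=1)
print(result[0]["generated_text"])
```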

The recent success of language models is primarily due to their extensive training on Internet data. However, while this training has improved their performance on many tasks, it has also created some issues.

Since the Internet contains incorrect, contradictory, and biased information, the models can sometimes give wrong, contradictory, or biased answers. It is, therefore, crucial to be cautious and not blindly trust everything generated by these models.


Understanding the limitations of these models is therefore vital to using them with appropriate caution.

Hallucinations of Language Models

In AI, the term “hallucination” refers to the phenomenon where a model confidently makes incorrect predictions, much as a person might see things that are not actually there. In language models, hallucination means generating and presenting incorrect information as if it were true.

4 Forms of AI’s Hallucinations

Hallucination can occur in a variety of forms, including:

Fabrication: In this scenario, the model simply generates false information. For instance, if you ask it about historical events like World War II, it might give you answers with made-up details or events that never actually occurred. It could mention non-existent battles or individuals.

Factual inaccuracy: In this scenario, the model produces statements that are factually incorrect. For example, if you ask about a scientific concept like the Earth’s orbit around the Sun, the model might provide an answer that contradicts established scientific findings. Instead of stating the Earth orbits the Sun, the model might wrongly claim that the Earth orbits the Moon.

Sentence contradiction: This occurs when the language model generates a sentence that contradicts what it previously stated. For example, the language model might assert that “Language models are very accurate at describing historical events,” but later claim, “In reality, language models often generate hallucinations when describing historical events.” These contradictory statements indicate that the model has provided conflicting information.

Nonsensical content: Sometimes, the generated content includes things that make no sense or are unrelated. For example, it might say, “The largest planet in our solar system is Jupiter. Jupiter is also the name of a popular brand of peanut butter.” This type of information lacks logical coherence and can confuse readers, as it includes irrelevant details that are neither necessary nor accurate in the given context.

2 Key Reasons Behind AI’s Hallucinations

There are several reasons why language models hallucinate. Some of the main ones are:

Data quality: Language models learn from a vast amount of data that can contain incorrect or conflicting information. When the data quality is low, it affects the model’s performance and causes it to generate incorrect responses. Since the models cannot verify whether the information is true, they may sometimes provide answers that are incorrect or unreliable.

Algorithmic limitations: Even if the underlying data is reliable, AI models can still generate inaccurate information due to inherent limitations in their functioning. As AI learns from extensive datasets, it acquires knowledge of various aspects crucial for generating text, including coherence, diversity, creativity, novelty, and accuracy. However, sometimes, certain factors, such as creativity and novelty, can take precedence, leading the AI to invent information that is not true.
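One concrete control behind this trade-off, not named above but commonly exposed by text-generation tools, is the sampling temperature: higher values make output more varied and creative, while lower values keep it closer to the most probable continuation. A minimal sketch, again assuming the Hugging Face transformers library and GPT-2 as a stand-in:

```python
# Illustrative only: "temperature" is a sampling setting exposed by many
# text-generation tools. Higher values favor diverse, creative continuations;
# lower values stay close to the most probable (often more factual) text.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "The largest planet in our solar system is"

# Low temperature: conservative, near-deterministic continuation.
cautious = generator(prompt, do_sample=True, temperature=0.2, max_new_tokens=10)

# High temperature: more varied output, and more prone to drifting away from
# factually grounded statements.
adventurous = generator(prompt, do_sample=True, temperature=1.5, max_new_tokens=10)

print(cautious[0]["generated_text"])
print(adventurous[0]["generated_text"])
```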

Outdated Information

Language models like ChatGPT are trained on datasets that end at a fixed cutoff date, which means they don’t have access to the latest information. As a result, their responses may sometimes be incorrect or outdated.

An example of how ChatGPT can present outdated information: when prompted with “How many moons does Jupiter have?”, ChatGPT, relying on data only up until 2021, answers that Jupiter has 79 moons. NASA’s more recent observations, however, indicate that Jupiter has between 80 and 95 moons, so the model fails to reflect this newer finding.

This demonstrates how language models may provide inaccurate information due to outdated knowledge, making their responses less reliable. Additionally, language models can struggle to comprehend new ideas or events, further affecting their responses.

Therefore, when using language models for quick fact-checking or to get up-to-date information, it is essential to keep in mind that their responses may not reflect the most recent developments on the topic.
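One common workaround, sketched below, is to supply the current fact in the prompt so the model answers from fresh context rather than from its training data. This is an illustration, not a method prescribed by the article; it assumes the official openai Python client, a configured API key, and a placeholder model name.

```python
# A minimal sketch of working around a knowledge cutoff: paste a current fact
# into the prompt so the model answers from fresh context instead of stale
# training data. Assumes the official "openai" Python client, a configured API
# key, and a placeholder model name.
from openai import OpenAI

client = OpenAI()

# An up-to-date fact, fetched from a current source (hard-coded here for brevity).
current_fact = "NASA's recent observations indicate Jupiter has between 80 and 95 known moons."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "Answer using this up-to-date information: " + current_fact},
        {"role": "user", "content": "How many moons does Jupiter have?"},
    ],
)
print(response.choices[0].message.content)
```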

Impact of Context

Language models use previous prompts in a conversation to enhance their understanding of user queries. This is beneficial for tasks such as in-context learning and step-by-step problem-solving in mathematics.

However, it is important to recognize that this reliance on context can occasionally lead to irrelevant or misleading responses when a new query deviates from the earlier conversation.

To get accurate answers, it is important to keep the conversation logical and connected.
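When calling a chat model through an API, keeping the conversation connected in practice means resending the accumulated message history with each request, because each request is handled independently. The sketch below illustrates that pattern; it assumes the official openai Python client and a placeholder model name.

```python
# A minimal sketch of carrying conversational context: when calling a chat API
# directly, the accumulated message history is resent with every request, so
# earlier turns shape later answers. Assumes the official "openai" Python
# client and a placeholder model name.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

history = [{"role": "user", "content": "Solve 12 * 7 step by step."}]
reply = client.chat.completions.create(model=MODEL, messages=history)
history.append({"role": "assistant", "content": reply.choices[0].message.content})

# This follow-up only makes sense in context ("that result"); sending the whole
# history keeps the answer connected to the earlier turn.
history.append({"role": "user", "content": "Now divide that result by 4."})
reply = client.chat.completions.create(model=MODEL, messages=history)
print(reply.choices[0].message.content)

# When the topic changes completely, starting a fresh history avoids misleading
# the model with stale context.
history = [{"role": "user", "content": "What is a language model?"}]
```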

Privacy and Data Security

Language models can make use of the information shared during interactions. Consequently, disclosing personal or sensitive information to these models carries inherent risks to privacy and security.

It is thus important to exercise caution and refrain from sharing confidential information when using these models.
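As a simple precaution along these lines, text can be screened for obvious personal identifiers before it is sent to a model. The sketch below shows a minimal redaction pass using regular expressions; it is illustrative only and catches just a few common patterns.

```python
# A minimal, illustrative redaction pass: strip obvious personal identifiers
# from text before sending it to a language model. These simple patterns catch
# only common cases and are not a complete privacy solution.
import re

def redact(text: str) -> str:
    # Email addresses
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[REDACTED EMAIL]", text)
    # Phone-number-like digit sequences
    text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[REDACTED PHONE]", text)
    return text

prompt = "Summarize this note from jane.doe@example.com, phone +1 555 123 4567."
print(redact(prompt))
# -> Summarize this note from [REDACTED EMAIL], phone [REDACTED PHONE].
```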

The Bottom Line

Language models like ChatGPT have the potential to completely transform our interaction with technology. However, it is crucial to acknowledge the associated risks. These models are susceptible to generating false, conflicting, and outdated information.

They may experience “hallucinations,” producing made-up details, factually incorrect statements, contradictory answers, or nonsensical responses.

Their reliability can be undermined by low data quality, algorithmic limitations, outdated training data, and the influence of conversational context.

Moreover, sharing personal information with these models can compromise privacy and data security, necessitating caution when interacting with them.

 



The body of a Ugandan Olympic athlete who was set on fire by her partner is received by family


 

NAIROBI, Kenya (AP) — The body of Ugandan Olympic athlete Rebecca Cheptegei — who died after being set on fire by her partner in Kenya — was received Friday by family and anti-femicide crusaders, ahead of her burial a day later.

Cheptegei’s family met with dozens of activists Friday who had marched to the Moi Teaching and Referral Hospital’s morgue in the western city of Eldoret while chanting anti-femicide slogans.

She is the fourth female athlete to be killed by her partner in Kenya in recent years, in yet another case of gender-based violence.

Viola Cheptoo, the founder of Tirop Angels, an organization formed in honor of athlete Agnes Tirop, who was stabbed to death in 2021, said stakeholders need to ensure this is the last death of an athlete due to gender-based violence.

“We are here to say that enough is enough, we are tired of burying our sisters due to GBV,” she said.

The mood at the morgue was somber as athletes and family members viewed Cheptegei’s body, which had sustained burns over 80% of its surface after she was doused with gasoline by her partner, Dickson Ndiema. Ndiema sustained burns to 30% of his body and later died of his injuries.

Ndiema and Cheptegei were said to have quarreled over a piece of land that the athlete bought in Kenya, according to a report filed by the local chief.

Cheptegei competed in the women’s marathon at the Paris Olympics less than a month before the attack. She finished in 44th place.

Cheptegei’s father, Joseph, said that the body will make a brief stop at their home in the Endebess area before proceeding to Bukwo in eastern Uganda for a night vigil and burial on Saturday.

“We are in the final part of giving my daughter the last respect,” a visibly distraught Joseph said.

He told reporters last week that Ndiema was stalking and threatening Cheptegei and the family had informed police.

Kenya’s high rates of violence against women have prompted marches by ordinary citizens in towns and cities this year.

An estimated 41% of Kenyan women who are dating or married, roughly four in 10, have experienced physical or sexual violence perpetrated by their current or most recent partner, according to the Kenya Demographic and Health Survey 2022.




The ancient jar smashed by a 4-year-old is back on display at an Israeli museum after repair


 

TEL AVIV, Israel (AP) — A rare Bronze Age jar accidentally smashed by a 4-year-old visiting a museum was back on display Wednesday after restoration experts were able to carefully piece the artifact back together.

Last month, a family from northern Israel was visiting the museum when their youngest son tipped over the jar, which smashed into pieces.

Alex Geller, the boy’s father, said his son — the youngest of three — is exceptionally curious, and that the moment he heard the crash, “please let that not be my child” was the first thought that raced through his head.

The jar has been on display at the Hecht Museum in Haifa for 35 years. It was one of the few containers of its size from that period that was still complete when it was discovered.

The Bronze Age jar is one of many artifacts exhibited out in the open, part of the Hecht Museum’s vision of letting visitors explore history without glass barriers, said Inbal Rivlin, the director of the museum, which is associated with Haifa University in northern Israel.

It was likely used to hold wine or oil, and dates back to between 2200 and 1500 B.C.

Rivlin and the museum decided to turn the incident, which captured international attention, into a teaching moment, inviting the Geller family back for a special visit and hands-on activity to illustrate the restoration process.

Rivlin added that the incident provided a welcome distraction from the ongoing war in Gaza. “Well, he’s just a kid. So I think that somehow it touches the heart of the people in Israel and around the world,” said Rivlin.

Roee Shafir, a restoration expert at the museum, said the repairs would be fairly simple, as the pieces were from a single, complete jar. Archaeologists often face the more daunting task of sifting through piles of shards from multiple objects and trying to piece them together.

Experts used 3D technology, high-resolution videos, and special glue to painstakingly reconstruct the large jar.

Less than two weeks after it broke, the jar went back on display at the museum. The gluing process left small hairline cracks, and a few pieces are missing, but the jar’s impressive size remains.

The only noticeable difference in the exhibit was a new sign reading “please don’t touch.”




B.C. sets up a panel on bear deaths, will review conservation officer training


 

VICTORIA – The British Columbia government is partnering with a bear welfare group to reduce the number of bears being euthanized in the province.

Nicholas Scapillati, executive director of the Grizzly Bear Foundation, said Monday that the partnership comes after months of discussions with the province on how to protect bears, with the goal of giving the animals a “better and second chance at life in the wild.”

Scapillati said what’s exciting about the project is that the government is open to working with outside experts and the public.

“So, they’ll be working through Indigenous knowledge and scientific understanding, bringing in the latest techniques and training expertise from leading experts,” he said in an interview.

B.C. government data show conservation officers destroyed 603 black bears and 23 grizzly bears in 2023, while 154 black bears were killed by officers in the first six months of this year.

Scapillati said the group will publish a report with recommendations by next spring, while an independent oversight committee will be set up to review all bear encounters with conservation officers to provide advice to the government.

Environment Minister George Heyman said in a statement that they are looking for new ways to ensure conservation officers “have the trust of the communities they serve,” and the panel will make recommendations to enhance officer training and improve policies.

Lesley Fox, with the wildlife protection group The Fur-Bearers, said they’ve been calling for such a committee for decades.

“This move demonstrates the government is listening,” said Fox. “I suspect, because of the impending election, their listening skills are potentially a little sharper than they normally are.”

Fox said the partnership came from “a place of long frustration” as provincial conservation officers kill more than 500 black bears every year on average, and the public is “no longer tolerating this kind of approach.”

“I think that the conservation officer service and the B.C. government are aware they need to change, and certainly the public has been asking for it,” said Fox.

Fox said there’s a lot of optimism about the new partnership, but, as with any government, there will likely be a lot of red tape to get through.

“I think speed is going to be important, whether or not the committee has the ability to make change and make change relatively quickly without having to study an issue to death,” said Fox.

This report by The Canadian Press was first published Sept. 9, 2024.

