Connect with us

Tech News

What are large language models (LLMs), why have they become controversial?

(TECHNOLOGY) Large language models guide our AI training and recently, ethicists have pointed out serious flaws in LLMs (which cost some their jobs).



artificial intelligence large language models

“Ethical” and “AI” aren’t two words often seen together (and one of them seems rare enough on its own these days), yet artificial intelligence ethics are extremely important for all of the non-artificial beings meandering around – especially when AI has the possibility to shape and influence real-world events.

The problems presented by unethical AI actions start with large language models (LLMs) and a fairly high-profile firing in Silicon Valley.

The Morning Brew’s Hayden Field explains that large language models are machine learning processes used to make AI “smarter” – if only perceptibly. You’ve seen them in use before if you use Google Docs, Grammarly, or any number of other services contingent on relatively accurate predictive text, including AI-generated emails and copy.

This style of machine learning is the reason we have things like GPT-3 (one of the most expansive large language models available) and Google’s BERT, which is responsible for the prediction and analysis you see in Google Search. It’s a clear convenience that represents one of the more impressive discoveries in recent history.

However, Field also summarizes the problem with large language models, and it’s not one we can ignore. “Left unchallenged, these models are effectively a mirror of the internet: the good, the mundane, and the disturbing,” she writes. Remember Microsoft’s AI experiment, Tay?! Yikes.

If you’ve spent any time in the darker corners of the Internet (or even just in the YouTube comment section) you’re aware of how profoundly problematic people’s observations can be. The fact that most, if not all of those interactions are catalogued by large language models is infinitely more troubling.

GPT-3 has a database spanning much of the known (and relatively unknown) Internet; as Field mentions, “the entirety of English-language Wikipedia makes up just 0.6% of GPT-3’s training data,” making it nearly impossible to comprehend just how much information the large language model has taken in.

So when the word “Muslim” was given to GPT-3 in an exercise in which it was supposed to finish the sentence, it should come as no surprise that in over 60 percent of cases, the model returned violent or stereotypical results. The Internet has a nasty habit of holding on to old information or biases as well as ones that are evergreen, and they’re equally available to inform large language models.

Dr. Timnit Gebru, a former member of Google’s Ethical AI division, recognized these problems and teamed up with Dr. Emily Bender of University of Washington and coworker Margaret Mitchell to publish a paper detailing the true dangers of the largest language models.

Gebru and Mitchell were fired within a few months of each other shortly after the paper warning of LLM dangers was published.

There is a hilariously high number of other ethical issues regarding large language models. They take up an inordinate amount of processing power, with one model training generating up to 626,000 pounds of CO2. They also tend to grow, making that impact higher over time.

They also have a lot of trouble incorporating languages that are not specifically American English due to the majority of training taking place here, making it tough for smaller countries or cultures to develop their own machine learning at a comparable pace, which widens the gap and strengthens ill perceptions that feed into the potential for prejudicial commentary from the AI.

The future of large language models is uncertain, but with the models being unsustainable, potentially problematic, and largely inaccessible to the majority of the non-English-speaking world, it’s hard to imagine that they will continue to accelerate upward. And given what we know about them now, it’s hard to see why anyone would want them to.

Jack Lloyd has a BA in Creative Writing from Forest Grove's Pacific University; he spends his writing days using his degree to pursue semicolons, freelance writing and editing, oxford commas, and enough coffee to kill a bear. His infatuation with rain is matched only by his dry sense of humor.

Continue Reading
Click to comment

Leave a Reply

Your email address will not be published. Required fields are marked *

Tech News

Further – the hybrid B2B and B2C startup providing all-in-one learning

(TECHNOLOGY) The Further app “filters” the web to find new skills for a daily dose of badge-earning learning. Consider it your personal learning library!



There are a ton of resources dedicated to online learning, but the Further app “filters” the web to find new skills for a daily dose of badge-earning learning. Consider it your personal learning library in the palm of your hand. The Further app works to create a continuous learning experience for all, including students, employees, and trainees in a variety of industries.

“We grant intelligent access to high-quality educational content for everyone.”

Educational environments, such as schools and universities, can benefit from weaving in informal learning, increasing engagement. Consultants can use Further to increase their personal knowledge, but also provide professional knowledge to their clients. Safety and health training manuals can be completed in the app for manufacturing, food and beverage, healthcare, retail, and more. Lastly, software and tech employees can keep ahead of the trends by using the Further app.

How it works: Users can choose and collect content from multiple online sources to support their personal or professional skills. The app allows users to automate learning between family, friends, coworkers, and more through groups. Lastly, users are provided with reports to track their learning progress and are given rewards for completing items. Further uses AI to provide personalization through its own learning algorithm – the more it knows the user – the higher quality of educational suggestions it gives related to their goals.

In addition to the above, the Further app implements specific features to create a seamless learning experience. The app comes with a curated dashboard with feed customization, optimized for the users’ specific needs. The content center is bursting with resources that allows you to be in command of your education. In-app and push notifications can be enabled for reminders to complete tasks or grant access to updated trends in the news. And as with any great digital product startup, the Further app allows users to give feedback based on their experiences – you can submit ideas or future requests at their public Trello board (pretty cool if you ask me).

Request early access, download the mobile app, or try out the web extension for Chrome on desktop.

Continue Reading

Tech News

How psychologists are using VR to profile your personality

(TECH NEWS) VR isn’t just for gamers. Psychologists are using it to research how people emotionally respond to threats. But does it come at the cost of privacy?



Man using VR in personality test.

When you put on a VR headset for the first time, most people have that ‘whoa’ moment. You’ve entered an enchanting otherworldly place that seems real, but you know it isn’t. You slowly tilt your head up to see a nicely lit blue sky. You turn your head around to see mountains and trees that weren’t there before. And, you finally look down to stare at your hands. Replaced by bright-colored gloves, you flex your hands to form a fist, then jazz hands, and back.

Playing VR games is exciting and interesting for a lot of gamers, and you would (or maybe wouldn’t) be surprised to know that psychologists think so, too. According to The Conversation, psychologists have started researching how people emotionally respond to potential threats using VR.

Do you think this is weird or cool? I’ll let the following help you decide.

So, why did psychologists think using VR would help them in their research?

In earlier studies, psychologists tested “human approach-avoidance behavior”. By mixing real and virtual world elements, they “observed participants’ anxiety on a behavioral, physiological, and subjective level.” Through their research, they found that anxiety could be measured, and “VR provokes strong feelings of fear and anxiety”.

In this case, how did they test emotional responses to potential threats?

For the study, 34 participants were recruited to assess how people have a “tendency to respond strongly to negative stimuli.” Using a room-scaled virtual environment, participants were asked to walk across a grid of translucent ice blocks suspended 200 meters above the ground. Participants wore head-mounted VR displays and used handheld controllers.

Also, sensors placed on the participants’ feet would allow them to interact with the ice blocks in 2 ways. By using one foot, they could test the block and decide if they wanted to step on it. This tested risk assessment. By using both feet, the participants would commit to standing on that block. This tested the risk decision.

The study used 3 types of ice blocks. Solid blocks could support the participant’s weight and would not change in appearance. Crack blocks could also support the participant’s weight, but interacting with it would change its color. Lastly, Fall blocks would behave like Crack blocks, but would shatter completely when stepped on with 2 feet. And, it would lead to a “virtual fall”.

So what did they find?

After looking at the data, researchers found out that by increasing how likely an ice block would disintegrate, the “threat” for the participant also increased. And, of course, participants’ behavior was more calculated as more cracks appeared along the way. As a result, participants opted to test more blocks before stepping on the next block completely.

But, what else did they find?

They found that data about a person’s personality trait could also be determined. Before the study, each participant completed a personality questionnaire. Based on the questionnaire and the participants’ behavior displayed in the study researchers were able to profile personality.

During the study, their main focus was neuroticism. And, neuroticism is one of the five major personality traits used to profile people. In other words, someone’s personality could now also be profiled in a virtual world.

So, it all comes down to data and privacy. And yes, this isn’t anything new. Data collection through VR has been a concern for a long while. Starting this month, Facebook is requiring all new Oculus VR owners to link their Facebook account to the hardware. Existing users will be grandfathered in until 2023.

All in all, VR in the medical field isn’t new, and it has come a long way. The question is whether the risk of our personality privacy is worth the cost.

Continue Reading

Tech News

Amazon backtracks on hybrid return-to-work plan, allows work from home

(TECHNOLOGY) Amazon retracts its original statement proposing a hybrid work schedule and is now open to allowing employees to work from home indefinitely.



Samsung photo with amazon app loading page.

Let’s face it, companies can’t make up their mind regarding remote work. One week it’s this, the next week it’s that. Somehow, even though they have been running smoothly while working from home in the midst of the pandemic, employees are now suddenly considered to be “twiddling their thumbs.”


Following in the footsteps of other FAANG companies, in March 2021, Amazon said that their “plan is to return to an office-centric culture as our baseline. We believe it enables us to invest, collaborate, and learn together most effectively.”

What a stark contrast from the newest proposition: “At a company of our size, there is no one-size-fits-all approach for how every team works best” said Jassy, the now CEO of Amazon.  

Multi-member Zoom call on a Apple Mac laptop with a blue mug of black coffee next to it.

Contradictory, but admirable! Before this most recent announcement, Amazon was going to require all corporate works to adhere to a hybrid schedule of 3 days in office, unless otherwise specified. The hybrid work plan was set to begin in September 2021.

Now, the decision falls into the individual team’s hands and employees will be evaluated based on performance, despite where they choose to work. However, the underlying preference is to be located at least within reasonable distance to their core team’s office in order to come in on short notice.

“The company expects most teams will need a few weeks to develop and communicate their respective plans.”

Once plans are more finalized, Amazon will share specific details prior to January 3rd, 2022 – the date they initially planned for everyone to return to the office. Even though they may be a little indecisive, compared to Facebook, Apple, and Google, they’re actually being more flexible.

Finger snaps for the king of two-day shipping.

Now you have an excuse to pop open on a new private tab, while working from home, and buy a little something to celebrate. Seems counterintuitive to what we’re trying to prove here, but it’s necessary. Treat yo’self!

Continue Reading

Our Great Partners

American Genius
news neatly in your inbox

Subscribe to our mailing list for news sent straight to your email inbox.

Emerging Stories

Get The American Genius
neatly in your inbox

Subscribe to get business and tech updates, breaking stories, and more!