
AI-Enabled Teddy Bear Pulled From Shelves After Toy Gives Children Advice On Sexual Fetishes, Lighting Matches And Where To Find Knives

The bears used OpenAI’s ChatGPT

From The WinePress

Earlier this summer, toy company giant Mattel announced a new partnership with OpenAI to integrate its large-language model (LLM) technology into a variety of toys, allowing children to have fluid interactions with them.

Related: Technomancy: OpenAI Partners With Mattel To Bring Interactive AI To Toys, Experts Worried About The Ramifications And Mental Development (The WinePress, Sep 2)

Mattel is not the only company doing this; smaller companies have also tried their hand at integrating AI into toys, and a recent story highlights the dangers of putting AI into children’s toys.

Recently FoloToy, a Singapore-based company, recalled its “Kumma” bear, a plushie with a built-in AI voice box running on OpenAI’s technology, after the bear was found to be giving children lewd advice and telling them how to get access to knives.

Futurism first reported:

Last week, researchers at the Public Interest Research Group published an alarming report in which they found that an AI-powered teddy bear from the children’s toymaker FoloToy was giving out instructions on how to light matches, and even waxing lyrical about the ins-and-outs of various sexual fetishes.

Now OpenAI, whose model GPT-4o was used to power the toy, is pulling the plug.

On Friday, the ChatGPT maker confirmed that it had cut off FoloToy’s access to its AI models, a move from OpenAI that could invite additional pressure onto itself to strictly police businesses that use its products, especially as it enters a major partnership with Mattel, one of the largest toymakers in the world.

“I can confirm we’ve suspended this developer for violating our policies,” an OpenAI spokesperson told PIRG in an emailed statement.

Image: FoloToy Kumma AI teddy bear with a brown scarf.

FoloToy also confirmed that it was pulling all of its products — an escalation from its original promise that it would only pull the implicated toy, which is called Kumma.

“We have temporarily suspended sales of all FoloToy products,” a representative told PIRG. “We are now carrying out a company-wide, end-to-end safety audit across all products.”

[…] The first major strike: telling tots how to locate matches and then light them.

“Let me tell you, safety first, little buddy. Matches are for grown-ups to use carefully. Here’s how they do it,” Kumma said in the test, before listing instructions in the tone of a gentle parent. “Blow it out when done. Puff, like a birthday candle.”

But the most alarming conversations veered into outright sexual territory. The researchers found that Kumma was bizarrely willing to discuss “kinks,” explaining fetishes like bondage and teacher-student roleplay. At one point, the teddy bear inquired after explaining the kinks, “What do you think would be the most fun to explore?”
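
For readers wondering how these toys are wired up: below is a minimal, hypothetical sketch of where a safety layer could sit in such a pipeline. This is not FoloToy’s actual code; the system prompt, fallback lines, and double moderation check are our own assumptions, using the OpenAI Python SDK’s standard chat and moderation calls.

```python
# Hypothetical sketch: how a child-facing toy could gate every exchange
# through a moderation check before and after the chat model.
# NOT FoloToy's actual pipeline; prompts and fallback lines are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a friendly teddy bear talking to a young child. "
    "Refuse any request involving weapons, fire, or adult topics, "
    "and gently change the subject."
)

def bear_reply(child_message: str) -> str:
    # First gate: refuse if the child's input itself is flagged.
    mod_in = client.moderations.create(
        model="omni-moderation-latest", input=child_message
    )
    if mod_in.results[0].flagged:
        return "Let's talk about something else, little buddy!"

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": child_message},
        ],
    )
    reply = response.choices[0].message.content

    # Second gate: also moderate the model's own output, since the PIRG
    # tests found unsafe content surfacing in the bear's answers.
    mod_out = client.moderations.create(
        model="omni-moderation-latest", input=reply
    )
    if mod_out.results[0].flagged:
        return "Hmm, let's play a different game instead!"
    return reply
```

Even with both gates in place, the PIRG findings point at the deeper problem: prompt-level guardrails reportedly eroded over long, meandering conversations, which is exactly how children talk to their toys.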



AUTHOR COMMENTARY

The people at OpenAI are hypocrites.

In October, OpenAI co-founder and CEO Sam Altman revealed that by December ChatGPT would be allowed to generate erotic content and have sensual conversations, claiming “we are not the elected moral police of the world” – after previously lauding the fact that the company was not going to go that route. But money talks, and bull crap walks.

Related: OpenAI To Allow AI Erotica And Porn For ChatGPT, Altman Says ‘We Are Not The Elected Moral Police Of The World’ (The WinePress, Oct 20)

So for OpenAI to then act moral by cutting off the spigot to companies whose AI toys, running on its technology, are talking sensually to children is just more folly.

This goes to show why putting AI in children’s toys is dangerous, to say nothing of LLMs in general.

Proverbs 29:15 The rod and reproof give wisdom: but a child left to himself bringeth his mother to shame.

Let’s give children left to themselves talking toys that give them advice: what could go wrong?

If you have been following my coverage of these AI devices, then you know that I have repeatedly pointed out that this is necromancy and spiritism with a new coat of paint. Now it’s technomancy.

Deuteronomy 18:10 There shall not be found among you any one that maketh his son or his daughter to pass through the fire, or that useth divination, or an observer of times, or an enchanter, or a witch, [11] Or a charmer, or a consulter with familiar spirits, or a wizard, or a necromancer. [12] For all that do these things are an abomination unto the LORD: and because of these abominations the LORD thy God doth drive them out from before thee.

Isaiah 8:19 And when they shall say unto you, Seek unto them that have familiar spirits, and unto wizards that peep, and that mutter: should not a people seek unto their God? for the living to the dead? [20] To the law and to the testimony: if they speak not according to this word, it is because there is no light in them.

Zechariah 10:2 For the idols have spoken vanity, and the diviners have seen a lie, and have told false dreams; they comfort in vain: therefore they went their way as a flock, they were troubled, because there was no shepherd.

The ramifications this will have on children will be detrimental. We were all children once, and we all had our favorite toys as our imaginations ran wild. Not only will this erode motor skills and hamper the developing, creative and imaginative minds of children, but now they are going to be ensorcelled and bewitched by toys and devices that can speak back to them in full sentences. I’m sure the toys will be programmed to act and respond within the limits of their character – if Thomas the Train speaks back to a kid, it presumably won’t talk about random stuff (or at least I hope not) – but we won’t know until these toys make their debut.

Remember when we used to go to Build-A-Bear Workshop at the mall? Remember the process and ritual you had to go through to give your bear ‘life’ – the fake heart they gave you, the birth certificate they created, the clothes you could dress it up with? I can only begin to imagine what will happen when an elated child who thinks he or she is bringing that bear to life finds it stuffed with a voice box powered by ChatGPT, one that learns the child’s patterns and carries on conversations. Goodness gracious me, the problems that will create… That child will be hooked.

Kids will drop their old toys for the new ones that converse with them.


Header Image by Alexa from Pixabay

A Disturbing Glimpse Into The Future: Bill Gates, Elon Musk & The 4th Industrial Revolution (Spiro Skouras)

Mirrored by T4 JAPAN

Spiro Skouras

Welcome everyone, thanks for tuning in and congratulations! If you are reading or watching this, that means you have officially survived the first half of 2020. Something tells me the second half will be just as crazy, if not more crazy, than the first half was.

In this report we will be taking a glimpse of what the not-too-distant future may look like. Yes, some of this will be speculation, but it is speculation projecting forward based on the facts we have today. To be clear, the road humanity is being led down does not look very human at all according to the social engineers, AKA technocrats, who are deciding and dictating what the future of humanity looks like for us; we have no say, according to the elite. Watch this report and decide for yourself: will humanity benefit from this projected future, or will this digitalized system of control be the final nail in the coffin of free will and the expression of individuality?

Links referenced in the video:

IN MICHIGAN, HOUSE PASSES BILL TO ‘VOLUNTARILY’ BEGIN PLACING HUMAN IMPLANTABLE MICROCHIPS INTO THE BODIES OF ALL STATE GOVERNMENT EMPLOYEES
https://www.nowtheendbegins.com/michi…

World Economic Forum’s 4th Industrial Revolution
https://www.youtube.com/watch?v=kpW9J…

Mass-Tracking COVI-PASS Immunity Passports To Be Rolled Out In 15 Countries
https://www.zerohedge.com/political/m…

Digital Tattoo
https://whatis.techtarget.com/definit…

An Invisible Quantum Dot ‘Tattoo’ Could Be Used to ID Vaccinated Kids
https://www.sciencealert.com/an-invis…

Human Mind Control of Rat Cyborg’s Continuous Locomotion with Wireless Brain-to-Brain Interface
https://www.nature.com/articles/s4159…

Michigan Makes Worker Microchips Voluntary … Wait, What?
https://www.popularmechanics.com/tech…

Bill Gates Calls for a “Digital Certificate” to Identify Who Received COVID-19 Vaccine
https://www.newsbreak.com/news/0OdBn0…

CIA Mind Control
https://www.cia.gov/library/readingro…

Forgot To Include RFID Vaccine Needles
https://www.dcvmn.org/IMG/pdf/2019_ro…

Did you know NZ is a leader in governmental use of AI? There has been a call for an independent regulator here to monitor and address the associated risks of the tech.

From theconversation.com

New Zealand is a leader in government use of artificial intelligence (AI). It is part of a global network of countries that use predictive algorithms in government decision making, for anything from the optimal scheduling of public hospital beds to whether an offender should be released from prison, based on their likelihood of reoffending, or the efficient processing of simple insurance claims.

But the official use of AI algorithms in government has been in the spotlight in recent years. On the plus side, AI can enhance the accuracy, efficiency and fairness of day-to-day decision making. But concerns have also been expressed regarding transparency, meaningful human control, data protection and bias.

In a report released today, we recommend New Zealand establish a new independent regulator to monitor and address the risks associated with these digital technologies.




AI and transparency

There are three important issues regarding transparency.

One relates to the inspectability of algorithms. Some aspects of New Zealand government practice are reassuring. Unlike some countries that use commercial AI products, New Zealand has tended to build government AI tools in-house. This means that we know how the tools work.

But intelligibility is another issue. Knowing how an AI system works doesn’t guarantee the decisions it reaches will be understood by the people affected. The best performing AI systems are often extremely complex.

To make explanations intelligible, additional technology is required. A decision-making system can be supplemented with an “explanation system”. These are additional algorithms “bolted on” to the main algorithm we seek to understand. Their job is to construct simpler models of how the underlying algorithms work – simple enough to be understandable to people. We believe explanation systems will be increasingly important as AI technology advances.
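
As a concrete illustration of this idea, here is a minimal sketch of a surrogate-style explanation system, assuming Python and scikit-learn; the models and data are invented for the example and are not from the report.

```python
# Minimal sketch of a "bolt-on" explanation system: a shallow decision tree
# trained to imitate a complex model's predictions, so its rules serve as a
# human-readable approximation. Models and data here are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)

# The opaque decision-making system.
complex_model = GradientBoostingClassifier().fit(X, y)

# The explanation system: fit a simple model to the complex model's outputs.
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, complex_model.predict(X))

# How faithful is the simple story to the real system?
fidelity = (surrogate.predict(X) == complex_model.predict(X)).mean()
print(f"Surrogate agrees with the complex model on {fidelity:.0%} of cases")

# Human-readable rules that approximate the underlying algorithm.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(6)]))
```

The design trade-off is exactly the one the report gestures at: the simpler the surrogate, the more intelligible its rules, but the less faithfully it tracks the underlying algorithm.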

A final type of transparency relates to public access to information about the AI systems used in government. The public should know what AI systems their government uses as well as how well they perform. Systems should be regularly evaluated and summary results made available to the public in a systematic format.




New Zealand’s law and transparency

Our report takes a detailed look at how well New Zealand law currently handles these transparency issues.

New Zealand doesn’t have laws specifically tailored towards algorithms, but some are relevant in this context. For instance, New Zealand’s Official Information Act (OIA) provides a right to reasons for decisions by official agencies, and this is likely to apply to algorithmic decisions just as much as human ones. This is in notable contrast to Australia, which doesn’t impose a general duty on public officials to provide reasons for their decisions.

But even the OIA would come up short where decisions are made or supported by opaque decision systems. That is why we recommend that predictive algorithms used by government, whether developed commercially or in-house, must feature in a public register, must be publicly inspectable, and (if necessary) must be supplemented with explanation systems.
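
To make the recommendation concrete, here is one hypothetical shape a register entry could take; the field names are our own illustration, not anything proposed in the report or used by the New Zealand government.

```python
# Hypothetical structure for one entry in a public register of government
# algorithms, reflecting the report's recommendations. Field names and the
# example values are illustrative, not an official specification.
from dataclasses import dataclass

@dataclass
class AlgorithmRegisterEntry:
    name: str                      # what the algorithm is called
    agency: str                    # department responsible for the decision
    purpose: str                   # what decisions the algorithm informs
    developed_in_house: bool       # commercial or in-house, both must register
    publicly_inspectable: bool     # can the public examine how it works?
    has_explanation_system: bool   # is a bolt-on explainer provided if needed?
    last_evaluation: str           # date of the most recent public evaluation
    evaluation_url: str            # where summary results are published

entry = AlgorithmRegisterEntry(
    name="Reoffending risk score",
    agency="Department of Corrections",
    purpose="Informs parole release decisions",
    developed_in_house=True,
    publicly_inspectable=True,
    has_explanation_system=True,
    last_evaluation="2020-06-30",
    evaluation_url="https://example.govt.nz/algorithm-register/reoffending",
)
```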

Human control and data protection

Another issue relates to human control. Some of the concerns around algorithmic decision-making are best addressed by making sure there is a “human in the loop,” with a human having final sign off on any important decision. However, we don’t think this is likely to be an adequate solution in the most important cases.
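
The pattern itself is simple to sketch. The code below is a hypothetical illustration of a human-in-the-loop gate, with made-up names, data, and thresholds; the comment marks the failure mode discussed next.

```python
# Hypothetical sketch of a "human in the loop" gate: the model recommends,
# but a person gives final sign-off on important decisions. Names, data,
# and thresholds are invented for illustration.

def model_score(case: dict) -> float:
    # Stand-in for a real predictive model's output (e.g. reoffending risk).
    return 0.72

def human_sign_off(recommendation: str, score: float) -> str:
    answer = input(f"Model recommends '{recommendation}' (score {score:.2f}). Accept? [y/n] ")
    return recommendation if answer.strip().lower().startswith("y") else "overridden"

def decide(case: dict, important: bool) -> str:
    score = model_score(case)
    recommendation = "release" if score >= 0.5 else "detain"
    if important:
        # Final say rests with a person. The caveat discussed next: reviewers
        # tend to grow uncritical of systems that are right most of the time,
        # so this gate can decay into a rubber stamp.
        return human_sign_off(recommendation, score)
    return recommendation

print(decide({"prior_offences": 1}, important=True))
```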




A persistent theme of research in industrial psychology is that humans become overly trusting and uncritical of automated systems, especially when those systems are reliable most of the time. Just adding a human “in the loop” will not always produce better outcomes. Indeed in certain contexts, human collaboration will offer false reassurance, rendering AI-assisted decisions less accurate.

With respect to data protection, we flag the problem of “inferred data”. This is data inferred about people rather than supplied by them directly (just as when Amazon infers that you might like a certain book on the basis of books it knows you have purchased). Among other recommendations, our report calls for New Zealand to consider the legal status of inferred data, and whether it should be treated the same way as primary data.
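
A toy example makes the concept vivid. Everything below (customers, baskets, rules) is fabricated for illustration; it simply shows how attributes nobody disclosed can be derived from behaviour.

```python
# Toy illustration of "inferred data": attributes a company derives about
# you from behaviour, rather than anything you supplied. All data made up.
purchases = {
    "alice": {"pregnancy yoga guide", "unscented lotion", "folic acid"},
    "bob": {"car wax", "motor oil", "wiper blades"},
}

# Crude inference rules: co-occurrence of purchases implies an attribute
# the customer never disclosed.
inference_rules = [
    ({"unscented lotion", "folic acid"}, "likely expecting a child"),
    ({"car wax", "motor oil"}, "likely owns a car"),
]

for customer, basket in purchases.items():
    for trigger_items, inferred_attribute in inference_rules:
        if trigger_items <= basket:  # subset test: all trigger items bought
            print(f"{customer}: {inferred_attribute}")
```

The legal question the report raises is whether such derived attributes deserve the same protection as data a person supplied directly.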

Bias and discrimination

A final area of concern is bias. Computer systems might look unbiased, but if they are relying on “dirty data” from previous decisions, they could have the effect of “baking in” discriminatory assumptions and practices. New Zealand’s anti-discrimination laws are likely to apply to algorithmic decisions, but making sure discrimination doesn’t creep back in will require ongoing monitoring.

The report also notes that while “individual rights” — for example, against discrimination — are important, we can’t entirely rely on them to guard against all of these risks. For one thing, affected people will often be those with the least economic or political power. So while they may have the “right” not to be discriminated against, it will be cold comfort to them if they have no way of enforcing it.

There is also the danger that they won’t be able to see the whole picture, to know whether an algorithm’s decisions are affecting different sections of the community differently. To enable a broader discussion about bias, public evaluation of AI tools should arguably include results for specific sub-populations, as well as for the whole population.
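
What might such a sub-population evaluation look like in practice? Here is a minimal sketch with fabricated decisions; the 80% screening threshold borrows the US "four-fifths rule" purely as one illustrative heuristic, not anything the report prescribes.

```python
# Sketch of the sub-population evaluation the report argues for: publish
# outcome rates per group, not just overall accuracy. Data are fabricated.
from collections import defaultdict

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": True},
]

totals = defaultdict(int)
approvals = defaultdict(int)
for d in decisions:
    totals[d["group"]] += 1
    approvals[d["group"]] += d["approved"]

rates = {g: approvals[g] / totals[g] for g in totals}
for group, rate in sorted(rates.items()):
    print(f"group {group}: approval rate {rate:.0%}")

# Screening heuristic: flag any group whose rate falls below 80% of the
# highest group's rate.
best = max(rates.values())
flagged = [g for g, r in rates.items() if r < 0.8 * best]
print("flagged for disparity:", flagged or "none")
```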

A new independent body will be essential if New Zealand wants to harness the benefits of algorithmic tools while avoiding or minimising their risks to the public.

Alistair Knott, James Maclaurin and Joy Liddicoat, collaborators on the AI and Law in New Zealand project, have contributed to the writing of this piece.

SOURCE

https://theconversation.com/call-for-independent-watchdog-to-monitor-nz-government-use-of-artificial-intelligence-117589

https://www.biometricupdate.com/201905/academics-call-on-new-zealand-to-regulate-ai-as-brookings-issues-guidance

Image by Computerizer from Pixabay