Category Archives: AI

Did you know NZ is a leader in governmental use of AI? There has been a call for an independent regulator here to monitor and address the risks associated with the technology.

From theconversation.com

New Zealand is a leader in government use of artificial intelligence (AI). It is part of a global network of countries that use predictive algorithms in government decision making: anything from the optimal scheduling of public hospital beds, to whether an offender should be released from prison based on their likelihood of reoffending, to the efficient processing of simple insurance claims.

But the official use of AI algorithms in government has been in the spotlight in recent years. On the plus side, AI can enhance the accuracy, efficiency and fairness of day-to-day decision making. But concerns have also been expressed regarding transparency, meaningful human control, data protection and bias.

In a report released today, we recommend New Zealand establish a new independent regulator to monitor and address the risks associated with these digital technologies.


AI and transparency

There are three important issues regarding transparency.

One relates to the inspectability of algorithms. Some aspects of New Zealand government practice are reassuring. Unlike some countries that use commercial AI products, New Zealand has tended to build government AI tools in-house. This means that we know how the tools work.

But intelligibility is another issue. Knowing how an AI system works doesn’t guarantee the decisions it reaches will be understood by the people affected. The best performing AI systems are often extremely complex.

To make explanations intelligible, additional technology is required. A decision-making system can be supplemented with an “explanation system”. These are additional algorithms “bolted on” to the main algorithm we seek to understand. Their job is to construct simpler models of how the underlying algorithms work – simple enough to be understandable to people. We believe explanation systems will be increasingly important as AI technology advances.
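The report does not prescribe a particular technique, but one common way to build such an explanation system is a global surrogate model: a deliberately simple model trained to mimic the predictions of the complex one. Below is a minimal sketch of that idea, assuming Python with scikit-learn; the random forest is purely a stand-in for an opaque decision system, not any tool actually used by government.

```python
# Hedged sketch: train an opaque model, then "bolt on" a simple surrogate
# model that approximates its behaviour in human-readable terms.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

# Stand-in for the complex, hard-to-interpret decision system (illustrative data).
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
opaque_model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The "explanation system": a shallow decision tree fitted not to the real
# outcomes, but to the opaque model's own predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, opaque_model.predict(X))

# Fidelity: how often the simple explanation agrees with the complex model.
fidelity = accuracy_score(opaque_model.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity to the opaque model: {fidelity:.2%}")

# Human-readable rules approximating how the opaque model behaves.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(8)]))
```

A surrogate like this is only an approximation of the underlying system, so its fidelity needs to be reported alongside its rules; local, per-decision explainers such as LIME or SHAP play a similar role for individual decisions.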

A final type of transparency relates to public access to information about the AI systems used in government. The public should know what AI systems their government uses as well as how well they perform. Systems should be regularly evaluated and summary results made available to the public in a systematic format.


New Zealand’s law and transparency

Our report takes a detailed look at how well New Zealand law currently handles these transparency issues.

New Zealand doesn’t have laws specifically tailored towards algorithms, but some are relevant in this context. For instance, New Zealand’s Official Information Act (OIA) provides a right to reasons for decisions by official agencies, and this is likely to apply to algorithmic decisions just as much as human ones. This is in notable contrast to Australia, which doesn’t impose a general duty on public officials to provide reasons for their decisions.

But even the OIA would come up short where decisions are made or supported by opaque decision systems. That is why we recommend that predictive algorithms used by government, whether developed commercially or in-house, must feature in a public register, must be publicly inspectable, and (if necessary) must be supplemented with explanation systems.

Human control and data protection

Another issue relates to human control. Some of the concerns around algorithmic decision-making are best addressed by making sure there is a “human in the loop,” with a human having final sign off on any important decision. However, we don’t think this is likely to be an adequate solution in the most important cases.


A persistent theme of research in industrial psychology is that humans become overly trusting and uncritical of automated systems, especially when those systems are reliable most of the time. Just adding a human “in the loop” will not always produce better outcomes. Indeed, in certain contexts, human collaboration will offer false reassurance, rendering AI-assisted decisions less accurate.

With respect to data protection, we flag the problem of “inferred data”. This is data inferred about people rather than supplied by them directly (just as when Amazon infers that you might like a certain book on the basis of books it knows you have purchased). Among other recommendations, our report calls for New Zealand to consider the legal status of inferred data, and whether it should be treated the same way as primary data.

Bias and discrimination

A final area of concern is bias. Computer systems might look unbiased, but if they are relying on “dirty data” from previous decisions, they could have the effect of “baking in” discriminatory assumptions and practices. New Zealand’s anti-discrimination laws are likely to apply to algorithmic decisions, but making sure discrimination doesn’t creep back in will require ongoing monitoring.

The report also notes that while “individual rights” — for example, against discrimination — are important, we can’t entirely rely on them to guard against all of these risks. For one thing, affected people will often be those with the least economic or political power. So while they may have the “right” not to be discriminated against, it will be cold comfort to them if they have no way of enforcing it.

There is also the danger that they won’t be able to see the whole picture, to know whether an algorithm’s decisions are affecting different sections of the community differently. To enable a broader discussion about bias, public evaluation of AI tools should arguably include results for specific sub-populations, as well as for the whole population.
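To make that concrete, here is a minimal sketch of what disaggregated evaluation could look like, assuming Python with pandas and an entirely hypothetical table of decisions; the column names (group, predicted, actual) and the data are invented for illustration.

```python
# Hedged sketch: compare an algorithm's error rates across sub-populations,
# not just in aggregate. All data and column names here are hypothetical.
import pandas as pd

decisions = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B", "B", "A"],
    "predicted": [1, 0, 1, 1, 1, 0, 1, 0],   # the algorithm's decision
    "actual":    [1, 0, 0, 0, 1, 0, 0, 0],   # what actually happened
})

def summarise(df: pd.DataFrame) -> pd.Series:
    """Accuracy and false-positive rate for one slice of the data."""
    accuracy = (df["predicted"] == df["actual"]).mean()
    negatives = df[df["actual"] == 0]
    fpr = (negatives["predicted"] == 1).mean() if len(negatives) else float("nan")
    return pd.Series({"n": len(df), "accuracy": accuracy, "false_positive_rate": fpr})

# Whole-population results alongside per-group results.
print(summarise(decisions))
print(decisions.groupby("group")[["predicted", "actual"]].apply(summarise))
```

The point is simply that aggregate accuracy can look acceptable while the false-positive rate for one group is far worse than for another; publishing only the whole-population figure would hide that.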

A new independent body will be essential if New Zealand wants to harness the benefits of algorithmic tools while avoiding or minimising their risks to the public.

Alistair Knott, James Maclaurin and Joy Liddicoat, collaborators on the AI and Law in New Zealand project, have contributed to the writing of this piece.

SOURCE

https://theconversation.com/call-for-independent-watchdog-to-monitor-nz-government-use-of-artificial-intelligence-117589

https://www.biometricupdate.com/201905/academics-call-on-new-zealand-to-regulate-ai-as-brookings-issues-guidance

Image by Computerizer from Pixabay

 

“AI is the biggest risk we face as a civilization”

In the video they say AI is a risk, and yet they are not putting any stops on it. Pandora’s box. An informative video.

https://www.youtube.com/watch?reload=9&v=brQPAH6Leyo&feature=share&fbclid=IwAR2ICHWATdmeUUmwEcAIkytCgEH3LPiT-deG91tq6Eevpvr5RQSV5h5gSPM

Published on Nov 19, 2018

THIS Is Only The Beginning See This Before it is Deleted 2018-2019 EVENTS WORLD NEWS NASA MOON AI ELON MUSK

Most people don’t even realize what’s coming

Surveillance, the chip, bots, DARPA, Lockheed Martin, cyborg soldiers, artificial intelligence, the future. An interesting watch.

Published on Oct 30, 2017

Will you get lost in the Fourth Industrial Revolution? Most people I asked don’t even know what that is, but it’s happening all around us right now. This system is about technological evolution… evolving us. (Truthstream Media: http://TruthstreamMedia.com)

Amazon’s AI home assistant lets slip some home truths about chemtrails

via thecontrail.com/

https://sputniknews.com/science/201804141063569835-chemtrail-conspiracy-amazon-robot/

Has Alexa, Amazon’s AI home assistant, inadvertently blown the whistle on state authorities?

According to Alexa, chemtrails “left by aircraft are actually chemical or biological agents deliberately sprayed at high altitudes for a purpose undisclosed to the general public by government officials.”

Amazon, having acknowledged that it was a programming error, hurried to correct Alexa’s answer. Replying to the same question about chemtrails, the digital assistant now says:

“Chemtrails refer to trails of condensation, or contrails, left by jet engine exhaust when they come into contact with cold air at high altitudes.”

Here is the original (a different video to the one originally posted here; that account is gone now, but I will leave their notes below, FYI).

Occurred on April 9, 2018 / Nottingham, UK. “I asked Alexa a question.” (via ViralHog)

Published on Apr 7, 2018

Rogue A.I. (Alexa) Tells Truth About Chemtrails April 2018 ~credits video: Robbo Da Yobbo https://www.youtube.com/channel/UCqgN…


Here is the new reply from Alexa: