In the face of AI experts repeatedly predicting the rise of sex robots, it’s increasingly difficult to insist that such machines strictly belong to a far-off, dystopian future. But some robotics experts predict we’ll soon be doing far more than having sexual intercourse with machines. Instead, we’ll be making love to them—with all the accompanying romantic feelings.
At this week’s “Love and Sex with Robots” conference at Goldsmiths, University of London, David Levy, author of a book on human-robot love, predicted that human-robot marriages would be legal by 2050. Adrian Cheok, computing professor at City University London and director of the Mixed Reality Lab in Singapore, says the prediction is not so farfetched.
“That might seem outrageous because it’s only 35 years away. But 35 years ago people thought homosexual marriage was outrageous,” says Cheok, who also spoke at the conference. “Until the 1970s, some states didn’t allow white and black people to marry each other. Society does progress and change very rapidly.”
And though human-robot marriage might not be legal until 2050, Cheok believes humans will be living with robot partners long before then.
Last year around this time, the CIA stood up its first new office since 1963—the Directorate of Digital Innovation—a seismic shift for the agency that legitimized the importance of technology, including big data and analytics.
According to Deputy Director for Digital Innovation Andrew Hallman, the man tapped by CIA Director John Brennan to run the digital wing, that digital pivot is paying off.
The agency, Hallman said, has significantly improved its “anticipatory intelligence,” using a mesh of sophisticated algorithms and analytics against complex systems to better predict the flow of everything from illicit cash to extremists around the globe. Deep learning and other forms of machine learning can help analysts understand how seemingly disparate data sets might be linked or lend themselves to predicting future events with national security ramifications.
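The kind of linkage Hallman describes can be pictured with a toy sketch. Everything below (the record fields, the names, the similarity threshold) is invented for illustration; real anticipatory-intelligence pipelines use far richer statistical and deep-learning models than a string-similarity match:

```python
from difflib import SequenceMatcher

# Hypothetical records from two unrelated collections: financial
# transfers and travel logs. All names and values are invented.
transfers = [
    {"name": "A. Karimov", "amount": 9500, "city": "Istanbul"},
    {"name": "J. Smith", "amount": 120, "city": "Boston"},
]
travel = [
    {"name": "Anvar Karimov", "route": "Istanbul -> Raqqa"},
    {"name": "Jane Doe", "route": "Paris -> London"},
]

def similarity(a, b):
    """Crude string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def link_records(xs, ys, threshold=0.6):
    """Pair records whose names look alike: a stand-in for the far
    richer linkage a real analytics pipeline performs."""
    links = []
    for x in xs:
        for y in ys:
            if similarity(x["name"], y["name"]) >= threshold:
                links.append((x["name"], y["name"]))
    return links

# The two Karimov records link across data sets; the others do not.
print(link_records(transfers, travel))
```

The point of the sketch is only the shape of the problem: two data sets that look disparate until a linkage rule ties their entities together.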
[…] Normally, dramatic drops like this are triggered by major news events, such as a declaration of war or a monumental political development. But experts say this incident was likely the result of trading algorithms that were reacting to recent comments made by French President Francois Hollande, who called for tougher Brexit negotiations.
“Apparently it was a rogue algorithm that triggered the sell-off after it picked up comments made by the French President, Francois Hollande, who said if Theresa May and [company] want hard Brexit, they will get hard Brexit,” noted Kathleen Brooks, research director at City Index.
She says that some modern algorithms trade on the back of news sites, and even on what’s trending on social media sites like Facebook and Twitter. “[A] deluge of negative Brexit headlines could have led to an algo taking that as a major sell signal for GBP,” said Brooks. “Once the pound started moving lower, then more technical algos could have followed suit.”
High frequency stock trading is a form of rapid-fire trading that involves algorithms, or bots, that can make decisions on the order of milliseconds. They’re guided by factors such as time, price, some fancy math—and even headline news. Compared to these lightning-fast traders, humans are slower by an order of magnitude, which means we’re increasingly being left out of the loop. Stock trading represents the first major domain in which we’re getting AI to do most of the work, and an entirely new digital ecology is emerging.
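A headline-driven signal of the sort Brooks describes can be caricatured in a few lines. The word lists, the threshold and the sample headlines below are all invented; production systems use sophisticated NLP, market microstructure data and millisecond-scale execution, none of which is captured here:

```python
# Toy headline-sentiment trading signal. Word lists and thresholds
# are placeholders invented for illustration.
NEGATIVE = {"hard", "crash", "plunge", "tougher", "fears", "rogue"}
POSITIVE = {"rally", "deal", "growth", "recovery", "gains"}

def headline_score(headline):
    """Score one headline: positive words add, negative words subtract."""
    words = {w.strip(".,!?").lower() for w in headline.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

def trade_signal(headlines, sell_threshold=-2):
    """Aggregate scores; a deluge of negative headlines trips a SELL."""
    total = sum(headline_score(h) for h in headlines)
    if total <= sell_threshold:
        return "SELL"
    if total >= -sell_threshold:
        return "BUY"
    return "HOLD"

feed = [
    "Hollande calls for tougher Brexit negotiations",
    "Hard Brexit fears weigh on sterling",
    "Markets brace for hard Brexit",
]
print(trade_signal(feed))  # the three negative headlines trip a SELL
```

Even this crude rule shows how a burst of same-direction headlines can cascade: one algorithm sells, the price moves, and technical algorithms follow the move rather than the news.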
Imagine it’s 2030, and it’s nearing time to eat dinner.
You text your order to a grocery store: a pound of ground beef, a box of Hamburger Helper and maybe some lettuce and tomatoes for a salad. Possibly you want to fancy it up with a bottle of cabernet. The beef was butchered and packaged by a machine. Robots picked and processed the grapes, and the wine was then bottled and shipped to a market by automation.
A driverless car, or possibly a drone aircraft, delivers the goods to your front door. You never see a person at any point in the text-to-your-doorstep process.
There are maybe four or five jobs currently associated with that scene: from the grocery store clerk to the produce person, to the butcher who packaged the beef and the workers at the winery where the grapes were picked.
All those jobs could vanish in the years ahead as technology moves at lightning speed to make our lives easier. It’s hard to imagine one area, maybe motherhood excepted, where humans couldn’t be replaced by automation or at least significantly affected by technologies.
- Autonomous Tractor Concept Takes The Farmers Out Of Farming
- No Sailors Needed: Robot Sailboats Scour the Oceans for Data
- Scientists teach Disney’s autonomous robot a sense of humour so he can play with children
- Will robots be BETTER lovers than humans? Sex with machines could become addictive, warns expert
- Technology is taking jobs away from men—and reviving a pre-industrial version of masculinity
- Why An AI-Judged Beauty Contest Picked Nearly All White Winners
- Does Walmart Understaff Its Stores in Minority Communities?
Google’s Big Bet: Machine Learning, Artificial Intelligence Will Be Its Secret Sauce, Winning Formula
Built-in search, artificial intelligence, machine learning and a knowledge graph connecting billions of entities is how Google plans to ultimately compete and win in many markets where it isn’t first today.
It’s easy to note the me-too items outlined at Google I/O. Android N has a few new features, but doesn’t advance the ball that much. Mobile platforms have hit the service pack, incremental update mode. Google Home is Google’s answer to Amazon’s Echo. Google’s Assistant, Duo and Allo are all catch-up efforts. Android Wear 2.0 is gunning for Apple’s Watch OS. Virtual reality for Google is setting up for a Facebook showdown. Instant Apps were an interesting advance, but overall there weren’t a lot of wow developments on Google I/O’s first day.
So the story is done, right? Google is playing from behind and isn’t advancing the ball much.
Not so fast.
The glue for all of these play-from-behind items is artificial intelligence, context, personalization and sheer computing power.
John Horgan is a science journalist who recently spoke at the Northeast Conference on Science and Skepticism (NECSS), held May 12-15 in New York City. His speech has been republished in Scientific American:
I hate preaching to the converted. If you were Buddhists, I’d bash Buddhism. But you’re skeptics, so I have to bash skepticism.
I’m a science journalist. I don’t celebrate science, I criticize it, because science needs critics more than cheerleaders. I point out gaps between scientific hype and reality. That keeps me busy, because, as you know, most peer-reviewed scientific claims are wrong.
So I’m a skeptic, but with a small S, not capital S. I don’t belong to skeptical societies. I don’t hang out with people who self-identify as capital-S Skeptics. Or Atheists. Or Rationalists.
When people like this get together, they become tribal. They pat each other on the back and tell each other how smart they are compared to those outside the tribe. But belonging to a tribe often makes you dumber.
Here’s an example involving two idols of Capital-S Skepticism: biologist Richard Dawkins and physicist Lawrence Krauss. Krauss recently wrote a book, A Universe from Nothing. He claims that physics is answering the old question, Why is there something rather than nothing?
Krauss’s book doesn’t come close to fulfilling the promise of its title, but Dawkins loved it. He writes in the book’s afterword: “If On the Origin of Species was biology’s deadliest blow to supernaturalism, we may come to see A Universe From Nothing as the equivalent from cosmology.”
Just to be clear: Dawkins is comparing Lawrence Krauss to Charles Darwin. Why would Dawkins say something so foolish? Because he hates religion so much that it impairs his scientific judgment. He succumbs to what you might call “The Science Delusion.”
“The Science Delusion” is common among Capital-S Skeptics. You don’t apply your skepticism equally. You are extremely critical of belief in God, ghosts, heaven, ESP, astrology, homeopathy and Bigfoot. You also attack disbelief in global warming, vaccines and genetically modified food.
These beliefs and disbeliefs deserve criticism, but they are what I call “soft targets.” That’s because, for the most part, you’re bashing people outside your tribe, who ignore you. You end up preaching to the converted.
Meanwhile, you neglect what I call hard targets. These are dubious and even harmful claims promoted by major scientists and institutions. In the rest of this talk, I’ll give you examples of hard targets from physics, medicine and biology. I’ll wrap up with a rant about war, the hardest target of all.
Not since the era of imperial Rome has the “thumbs-up” sign been such a potent and public symbol of power. A mere 12 years after it was founded, Facebook is a great empire with a vast population, immense wealth, a charismatic leader, and mind-boggling reach and influence. The world’s largest social network has 1.6 billion users, a billion of whom use it every day for an average of over 20 minutes each. In the Western world, Facebook accounts for the largest share of the most popular activity (social networking) on the most widely used computing devices (smartphones); its various apps account for 30% of mobile internet use by Americans. And it is the sixth-most-valuable public company on Earth, worth some $325 billion.
Even so, Mark Zuckerberg, Facebook’s 31-year-old founder and chief executive, has even greater ambitions (see article). He has plans to connect the digitally unconnected in poor countries by beaming internet signals from solar-powered drones, and is making big bets on artificial intelligence (AI), “chatbots” and virtual reality (VR). This bid for dominance will bring him into increasing conflict with the other great empires of the technology world, and Google in particular. The ensuing battle will shape the digital future for everyone.
- The new face of Facebook: How to win friends and influence people
- Inside Facebook’s Ambitious Plan to Connect the Whole World
- Sour grapes at Facebook over Google’s AI victory
- How Facebook destroyed the global balance of power
- Inside Mark Zuckerberg’s Bold Plan For The Future Of Facebook
- Prepare for the Global Corporate State of Facebook?
- How Facebook Could End Up Controlling Everything You Watch and Read Online
[On Wednesday] Microsoft debuted “Tay” to the world, an “artificial intelligent chat bot developed … to experiment with and conduct research on conversational understanding.” Within hours Twitter had turned the naïve AI bot into a stream of “racist, sexist, Holocaust-denying” posts covering everything from politics to race relations to attacking women. Without a protective cushion of keyword and content filters and base domain knowledge about offensive topics, the AI chatbot naively engaged with the world and innocently mimicked what it was being told, much as a human child might. Unfortunately the bot “proved a smash hit with racists, trolls, and online troublemakers, who persuaded Tay to blithely use racial slurs, defend white-supremacist propaganda, and even outright call for genocide.” What might we learn from this about the future of AI?
One of the greatest challenges in creating production AI comes when it moves from the controlled conditions of the lab to the great outdoors of the real world. Microsoft obviously never intended to create a chatbot spewing such offensive commentary to the world. The problem is that it did not anticipate what the caustic and often toxic world of social media might do and instead designed its bot with the same degree of high innocence as a human child. Much as a child innocently repeats offensive words in inappropriate contexts, Tay lacked the domain knowledge to understand what it was saying.
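The “protective cushion of keyword and content filters” the piece describes can be sketched minimally. The block list and canned reply below are placeholders; a production moderation layer stacks trained classifiers, context models and human review on top of anything this simple:

```python
# Minimal keyword-based content filter of the kind the article says
# Tay lacked. BLOCKED_TOPICS entries are placeholder tokens, not a
# real moderation list.
BLOCKED_TOPICS = {"genocide", "blockedterm1", "blockedterm2"}

def moderate(user_message):
    """Return a canned safe reply for messages touching blocked topics;
    return None to let the message pass through to the chat model."""
    words = {w.strip(".,!?").lower() for w in user_message.split()}
    if words & BLOCKED_TOPICS:
        return "I'd rather not talk about that."
    return None

print(moderate("tell me about genocide"))
print(moderate("how was your day?"))
```

Crucially, such a filter sits in front of both halves of a learning chatbot: it screens what the bot says, and, just as importantly, what the bot is allowed to learn from.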
The age of super smart machines is no longer the stuff of science fiction. The reality of their arrival is presenting both great opportunities and profound challenges across a wide swath of our lives. Artificial Intelligence (AI) expert Roman Yampolskiy discusses the emergence of smart machines and their impact on human pursuits, work and the economy. Scott Kimel and Jenny Wu join the discussion for a broader perspective on the effects and implications of this new era. (Idea Festival)
Robots have been keeping police officers safe for years by disposing of bombs and assisting in hostage situations, but a rapid increase in technology could trigger a robot revolution over the next decade.
Robots are already being weaponized: In 2014, a South African company started selling drones that could shoot 80 pepper balls per second, and police in North Dakota have been cleared to use a type of drone that is armed with tear gas and Tasers. Police use of Tasers—they’re designed to be nonlethal but can trigger cardiac arrest—killed 540 Americans from 2001 to 2013, according to Amnesty International. Right now, this technology requires an operator to remotely control the robots and the weapons. But autonomous weaponized robots are already being used by the Israeli military to patrol that nation’s borders, and a Texas company has created a drone to hover over private property and, without human instruction, fire a Taser dart to keep a potential intruder under shock until the authorities arrive. Imagine a convergence in technology that also gives these robots facial-recognition capability. Given the right circumstances, such as a terrorist threat, these robots could be rolled out in large numbers to protect citizens.
Connected to the cloud in order to work in tandem with other robots, they would be the perfect tools to ID and track large numbers of people from afar and from the air. The threat of future attacks would make these robots hard to put away again. And don’t forget the problems with all computerized devices: They can be hacked and used against the authorities or innocent victims. They can be spoofed about their location and crash into buildings. And they have already been used to commit crimes like theft, snooping and drug smuggling. We shouldn’t stifle the onset of new technologies that could help humanity, but we need a bill of human technological rights to ensure our individual freedoms. Otherwise, I am predicting a gradual erosion of human rights such as freedom of movement, privacy and even life.
- ‘Killer Robots’ Could Trigger New Arms Race
- Humans, Not Robots, Are the Real Reason Artificial Intelligence Is Scary
- Podcast: Noel Sharkey on the unreliability of ‘killer robots’
- Killer Robots With Automatic Rifles Could Be on the Battlefield in 5 Years
- Toy Soldiers to Killer Robots: Professor Noel Sharkey at TEDx
- U.S. military gets fired up over weaponized robots
- Drone race will ultimately lead to a sanitised factory of slaughter
- Noel Sharkey – Wikipedia
‘Disturbing news has emerged that a bullet was fired at a police helicopter during this year’s Bilderberg conference. Who would do such a thing? It turns out, the people who would do it are the police. The bullet was fired accidentally, admits a government spokesperson, by a member of Austria’s elite EKO Cobra counter-terrorism squad: the rifle went off when the officer was climbing into the helicopter.
Luckily, no one was hurt. One helicopter was slightly injured. That’s a relief, although it begs the question: what on earth is a crack anti-terror unit doing flying helicopter patrols around Telfs with live ammo and the safety off? Last time I looked we’re in the Tyrol, not Vietnam.
It seems appropriate that the single act of violence perpetrated towards police at Bilderberg 2015 was the direct result of their crazily heavy policing. In fact, the only injury the police received the entire time was when an officer sprained his lips from going “brrrm brrrm brrrm” while sitting in their redundant armoured personnel carrier and pretending to drive it.
The police won’t spend money on a much-needed press accreditation centre here, but they’re happy to splash out on an armour-plated snowplough. What a joke.’
- The Bankers of Bilderberg: The hills are alive with the sound of money
- Bilderberg 2015: TTIP and a travesty of transparency
- Koç pops up at Bilderberg: could this be the year to let it all hang out?
- The continual police checks are ruining my Bilderberg party
- At the G7, we journalists were pampered – at Bilderberg we’re harassed by police
- Forget the G7 summit – Bilderberg is where the big guns go
‘Robots are about to get a lot more personal.
Whether in the home or the workplace, social robots are going to become a lot more commonplace in the next few years. Or at least that is what the companies bringing these personalized machines to market are hoping.’
‘This report has, to the best of the authors’ knowledge, created the first list of global risks with impacts that for all practical purposes can be called infinite. It is also the first structured overview of key events related to such risks and has tried to provide initial rough quantifications for the probabilities of these impacts.
With such a focus it may surprise some readers to find that the report’s essential aim is to inspire action and dialogue as well as an increased use of the methodologies used for risk assessment.
The real focus is not on the almost unimaginable impacts of the risks the report outlines. Its fundamental purpose is to encourage global collaboration and to use this new category of risk as a driver for innovation.
The idea that we face a number of global risks threatening the very basis of our civilisation at the beginning of the 21st century is well accepted in the scientific community, and is studied at a number of leading universities. But there is still no coordinated approach to address this group of risks and turn them into opportunities.’
This is a predatory model where you are the prey and consumer products are the bait.
Once a Spyware 2.0 company like Google has convinced you to use their products, they proceed to watch everything you do. Their goal is to learn as much about you as they can.
You are the lab rat.
They study you because the insight they gain about you is the value they sell to their customers.
Selling people is not an entirely new business model. There was once a very financially-rewarding global business built on selling people’s bodies.
We called it slavery.
Today, we frown upon that particular practice in polite company. It’s about time to ask ourselves, however, what are we to call the business of selling everything about a person that makes them who they are apart from their body?
If the question makes you feel uncomfortable, good.
If just thinking about it makes you feel uncomfortable, imagine how living within a system where this business model is a monopoly will make you feel. Then imagine what a society shaped by its ramifications will look like. Imagine its effects on equality, human rights, and democracy.
You don’t have to try too hard to imagine any of this because we are already living in the early days of just such a world today.
And yet it’s still early enough that I’m hopeful we can challenge the unfettered progress of this Silicon Valley model that is toxic to our human rights and threatens the very pillars of democracy itself.’
- Elon Musk and Other Visionaries Are Worried About the Future of AI
- How the Pentagon’s Skynet Would Automate War
- The rise of the transhumanist movement
- Is Artificial Intelligence a Threat?
- The five biggest threats to human existence
- DARPA spent over $1 billion trying to build Skynet in the 1980s
- Now The Military Is Going To Build Robots That Have Morals
- Introducing AISight: The slightly scary CCTV network completely run by AI
- The Inevitable Connection Between Artificial Intelligence and Surveillance
- New DARPA Office Merges Biology And Technology
- Our Final Invention: Artificial Intelligence and the End of the Human Era (Book)
‘Pentagon officials are worried that the US military is losing its edge compared to competitors like China, and are willing to explore almost anything to stay on top—including creating watered-down versions of the Terminator.
Due to technological revolutions outside its control, the Department of Defense (DoD) anticipates the dawn of a bold new era of automated war within just 15 years. By then, they believe, wars could be fought entirely using intelligent robotic systems armed with advanced weapons.
Last week, US defense secretary Chuck Hagel announced the ‘Defense Innovation Initiative’—a sweeping plan to identify and develop cutting edge technology breakthroughs “over the next three to five years and beyond” to maintain global US “military-technological superiority.” Areas to be covered by the DoD programme include robotics, autonomous systems, miniaturization, Big Data and advanced manufacturing, including 3D printing.
But just how far down the rabbit hole Hagel’s initiative could go—whether driven by desperation, fantasy or hubris—is revealed by an overlooked Pentagon-funded study, published quietly in mid-September by the DoD National Defense University’s (NDU) Center for Technology and National Security Policy in Washington DC.’
- Hagel Announces New Defense Innovation, Reform Efforts
- Defense Secretary: U.S. needs “game-changing” military technologies to offset more muscular Russia and China
- Policy Challenges of Accelerating Technological Change: Security Policy and Strategy Implications of Parallel Scientific Revolutions
- US Defense Secretary Chuck Hagel: New World Order Means Endless War
- DARPA spent over $1 billion trying to build Skynet in the 1980s
- Elon Musk worries Skynet is only five years off
- Elon Musk Is Not Alone In His Belief That Robots Will Eventually Want To Kill Us All
- 20YY: Preparing for War in the Robotic Age
- Supercomputer models one second of human brain activity
- Darpa, Venter Launch Assembly Line for Genetic Engineering
- Pentagon preparing for mass civil breakdown
- Barack Obama’s Secret Terrorist-Tracking System, by the Numbers
- Pentagon Funds New Data-Mining Tools To Track and Kill Activists
- “Civilian casualties” authorized under secret US drone-strike memo
- Obama’s drone war kills ‘others,’ not just al Qaida leaders
- Living Under Drones: The Numbers
- The U.S. Navy’s First Laser Cannon Is Now Deployed in the Persian Gulf
- CIA Chief: We’ll Spy on You Through Your Dishwasher
- Full-spectrum dominance
‘Whatever you think of transhumanism, one thing is quite certain: the transhumanist movement is alive, healthy and growing. In any ordinary week in the world of bioethics, several articles will be published exploring one aspect or other of transhumanism.
Consider, for example, Zoltan Istvan, best-selling author and self-proclaimed “transhumanist visionary”. Istvan has published 20 articles this year in the Huffington Post on transhumanism. He recently announced that he intends to run as a representative of the Transhumanist Party in the 2016 US presidential elections.
There is also a fully-fledged international transhumanist society, Humanity+. The organisation, founded in 1998, runs seminars around the world to discuss the latest developments in human enhancement technologies. The organisation also publishes the online quarterly Humanity+ magazine, a publication dedicated to discussing transhumanist news and ideas.
In a recent blog post, Wesley Smith argued that the transhumanist vision was a mere ‘utopian fantasy land’. A small army of transhumanist supporters came to the support of the movement, commenting extensively on the article and criticising Smith’s argument.’
- Should a Transhumanist Run for US President?
- Transhumanism’s Utopian Fantasy Land
- The Biggest Worry About Transhumanism
- Human thoughts used to switch on genes
- ‘Transhumanists’ are planning to upload your mind to a memory stick…
- A transhumanist wants children to know that Death Is Wrong
- U.S. spy agency predicts a very transhuman future by 2030
- Francis Fukuyama: Transhumanism
- The Singularity Is Near (Book)
‘When the world ends, it may not be by fire or ice or an evil robot overlord. Our demise may come at the hands of a superintelligence that just wants more paper clips.
So says Nick Bostrom, a philosopher who founded and directs the Future of Humanity Institute, in the Oxford Martin School at the University of Oxford. He created the “paper-clip maximizer” thought experiment to expose flaws in how we conceive of superintelligence. We anthropomorphize such machines as particularly clever math nerds, says Bostrom, whose book Superintelligence: Paths, Dangers, Strategies was released in Britain in July and arrived stateside this month. Spurred by science fiction and pop culture, we assume that the main superintelligence-gone-wrong scenario features a hostile organization programming software to conquer the world. But those assumptions fundamentally misunderstand the nature of superintelligence: The dangers come not necessarily from evil motives, says Bostrom, but from a powerful, wholly nonhuman agent that lacks common sense.’
- You Should Be Terrified of Superintelligent Machines
- This Is What It Will Look Like When Robots Take All Our Jobs
- Elon Musk: Artificial intelligence will be ‘more dangerous than nukes’
- IBM’s new supercomputing chip mimics the human brain with very little power
- Intelligent Machines Scare Smart People
- Stephen Hawking: Are we taking AI seriously enough?
- But What Would the End of Humanity Mean for Me?
- Advances in artificial intelligence could lead to mass unemployment, warn experts
‘In the daily hubbub of current “crises” facing humanity, we forget about the many generations we hope are yet to come. Not those who will live 200 years from now, but 1,000 or 10,000 years from now. I use the word “hope” because we face risks, called existential risks, that threaten to wipe out humanity. These risks are not just for big disasters, but for the disasters that could end history.
These risks remain understudied. There is a sense of powerlessness and fatalism about them. People have been talking about apocalypses for millennia, but few have tried to prevent them. Humans are also bad at doing anything about problems that have not occurred yet (partially because of the availability heuristic – the tendency to overestimate the probability of events we know examples of, and underestimate events we cannot readily recall).
If humanity becomes extinct, at the very least the loss is equivalent to the loss of all living individuals and the frustration of their goals. But the loss would probably be far greater than that. Human extinction means the loss of meaning generated by past generations, the lives of all future generations (and there could be an astronomical number of future lives) and all the value they might have been able to create. If consciousness or intelligence are lost, it might mean that value itself becomes absent from the universe. This is a huge moral reason to work hard to prevent existential threats from becoming reality. And we must not fail even once in this pursuit.
With that in mind, I have selected what I consider the five biggest threats to humanity’s existence. But there are caveats that must be kept in mind, for this list is not final.’
Editor’s Note: Adam Curtis is a documentary film maker who focusses on “power and how it works in society”. His films include ‘The Power of Nightmares’ and ‘The Century of Self’ among many others. Watch them, watch them all.
‘If you are an American politician today, as well as an entourage you also have a new, modern addition. You have what’s called a “digital tracker”. They follow you everywhere with a high-definition video camera, and they are employed by the people who want to destroy your political career.
It’s called “opposition research” and the aim is to constantly record everything you say and do. The files are sent back every night to large anonymous offices in Washington where dozens of researchers systematically compare everything you said today with what you said in the past.
They are looking for contradictions. And if they find one – they feed it, and the video evidence, to the media.
On one hand it’s old politics – digging up the dirt on your opponent. But it is also part of something new – and much bigger than just politics. Throughout the western world new systems have risen up whose job is to constantly record and monitor the present – and then compare that to the recorded past. The aim is to discover patterns, coincidences and correlations, and from that find ways of stopping change. Keeping things the same.
We can’t properly see what is happening because these systems are operating in very different areas – from consumerism, to the management of your own body, to predicting future crimes, and even trying to stabilise the global financial system – as well as in politics.
But taken together the cumulative effect is that of a giant refrigerator that freezes us, and those who govern us, into a state of immobility, perpetually repeating the past and terrified of change and the future.’
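The “record the present, compare it to the recorded past” pattern Curtis describes reduces to a very small algorithm. The statements, topics and stances below are invented for illustration; real opposition-research and monitoring systems work over video transcripts and vastly larger archives:

```python
# Toy contradiction finder in the spirit of "opposition research":
# compare today's statements with an archive and flag pairs on the
# same topic whose stances differ. All records are invented.
archive = [
    {"topic": "tax", "stance": "against", "quote": "I will never raise taxes."},
]
today = [
    {"topic": "tax", "stance": "for", "quote": "We must raise taxes now."},
    {"topic": "jobs", "stance": "for", "quote": "Jobs are my priority."},
]

def find_contradictions(new, old):
    """Pair up statements on the same topic with opposing stances."""
    hits = []
    for n in new:
        for o in old:
            if n["topic"] == o["topic"] and n["stance"] != o["stance"]:
                hits.append((o["quote"], n["quote"]))
    return hits

for past, present in find_contradictions(today, archive):
    print("Then:", past, "| Now:", present)
```

The unsettling part is not the comparison itself but the scale: applied continuously to everything a person has ever said, even a rule this simple becomes a machine for punishing change.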
- Opposition research
- BlackRock’s Aladdin: genie not included
- BlackRock: The monolith and the markets
- The rise of BlackRock
- BlackRock Inc.
- Duncan Campbell’s ‘The Secret Society’ (1987)
- Boolean algebra
- The Fourth Dimension by Charles Howard Hinton (1912)
- Wilfrid Michael Voynich
- The Gadfly by E.L. Voynich
- The riddle of the Voynich Manuscript
‘From 1983 to 1993 DARPA spent over $1 billion on a program called the Strategic Computing Initiative. The agency’s goal was to push the boundaries of computers, artificial intelligence, and robotics to build something that, in hindsight, looks strikingly similar to the dystopian future of the Terminator movies. They wanted to build Skynet.
Much like Ronald Reagan’s Star Wars program, the idea behind Strategic Computing proved too futuristic for its time. But with the stunning advancements we’re witnessing today in military AI and autonomous robots, it’s worth revisiting this nearly forgotten program, and asking ourselves if we’re ready for a world of hyperconnected killing machines. And perhaps a more futile question: Even if we wanted to stop it, is it too late?’
‘While Internet trolls and members of Congress wage war over edits on Wikipedia, Swedish university administrator Sverker Johansson has spent the last seven years becoming the most prolific author…by a long shot. In fact, he’s responsible for over 2.7 million articles or 8.5% of all the articles in the collection, according to The Wall Street Journal. And it’s all thanks to a program called Lsjbot.
Johansson’s software collects info from databases on a particular topic then packages it into articles in Swedish and two dialects of Filipino (his wife’s native tongue). Many of the posts focus on innocuous subjects — animal species or town profiles. Yet, the sheer volume of up to 10,000 entries a day has vaulted Johansson and his bot into the top leaderboard position and hence, the spotlight.’
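The Lsjbot approach, pulling structured fields from a database and pouring them into a prose template, can be sketched in a few lines. The species record and template here are entirely invented; the real bot works from taxonomic databases and generates text in Swedish and two Filipino dialects:

```python
# Toy template-based article generator in the spirit of Lsjbot.
# The record and template are invented placeholders.
def article_from_record(record, template):
    """Fill a prose template from a structured database record."""
    return template.format(**record)

species = {
    "name": "Examplia borealis",      # made-up species name
    "group": "dragonfly",
    "family": "Synthemistidae",
    "described_by": "Fraser",
    "year": 1924,
}
TEMPLATE = ("{name} is a species of {group} in the family {family}. "
            "It was first described by {described_by} in {year}.")

print(article_from_record(species, TEMPLATE))
```

Run over thousands of database rows a day, a generator like this explains how one person’s bot can out-produce every human editor on the site combined.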
He echoed the words of Peter Diamandis, who says that we are moving from a history of scarcity to an era of abundance. Then he noted that the technologies that make such abundance possible are allowing production of far more output using far fewer people.
On all this, Summers is right. Within two decades, we will have almost unlimited energy, food, and clean water; advances in medicine will allow us to live longer and healthier lives; robots will drive our cars, manufacture our goods, and do our chores.
There won’t be much work for human beings…’
‘It happens quickly—more quickly than you, being human, can fully process. A front tire blows, and your autonomous SUV swerves. But rather than veering left, into the opposing lane of traffic, the robotic vehicle steers right. Brakes engage, the system tries to correct itself, but there’s too much momentum. Like a cornball stunt in a bad action movie, you are over the cliff, in free fall.
Your robot, the one you paid good money for, has chosen to kill you. Better that, its collision-response algorithms decided, than a high-speed, head-on collision with a smaller, non-robotic compact. There were two people in that car, to your one. The math couldn’t be simpler.
This, roughly speaking, is the problem presented by Patrick Lin, an associate philosophy professor and director of the Ethics + Emerging Sciences Group at California Polytechnic State University. In a recent opinion piece for Wired, Lin explored one of the most disturbing questions in robot ethics: If a crash is unavoidable, should an autonomous car choose who it slams into?’
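The casualty-minimizing logic in the scenario above, two occupants in the other car versus one in yours, reduces to a grimly simple comparison. The following toy sketch shows that utilitarian core; the option labels and casualty counts are illustrative assumptions, not any manufacturer's actual collision-response code.

```python
# Toy sketch of a utilitarian collision-response choice: given a set of
# unavoidable-crash options, pick the one with the fewest expected casualties.
def choose_collision(options):
    """Return the (label, expected_casualties) option minimizing casualties."""
    return min(options, key=lambda opt: opt[1])

options = [
    ("head-on with compact (2 occupants)", 2),
    ("swerve off cliff (1 occupant: you)", 1),
]

choice = choose_collision(options)
print(choice)  # the cliff: one expected casualty instead of two
```

The unsettling point of Lin's piece is precisely that this arithmetic is easy to write and hard to justify: a one-line `min` silently encodes a moral judgment about whose life counts for what.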
- The Robot Car of Tomorrow May Just Be Programmed to Hit You
- UN Considers Banning Killer Robots
- U.N. debates future ban on killer robots
- Cheetah robot ‘runs faster than Usain Bolt’
- Human Rights Watch: Ban “Terminator” Robots Before We Lose Control
- The Rise of the Machines: Why Increasingly “Perfect” Weapons Help Perpetuate our Wars and Endanger Our Nation
‘Are robots capable of moral or ethical reasoning? It’s no longer just a question for tenured philosophy professors or Hollywood directors. This week, it’s a question being put to the United Nations. The Office of Naval Research will award $7.5 million in grant money over five years to university researchers from Tufts, Rensselaer Polytechnic Institute, Brown, Yale and Georgetown to explore how to build a sense of right and wrong and moral consequence into autonomous robotic systems.
“Even though today’s unmanned systems are ‘dumb’ in comparison to a human counterpart, strides are being made quickly to incorporate more automation at a faster pace than we’ve seen before,” Paul Bello, director of the cognitive science program at the Office of Naval Research, told Defense One. “For example, Google’s self-driving cars are legal and in-use in several states at this point. As researchers, we are playing catch-up trying to figure out the ethical and legal implications. We do not want to be caught similarly flat-footed in any kind of military domain where lives are at stake.”’
- United Nations to Debate ‘Should We Ban Killer Robots?’
- ‘Killer robots’ to be debated at UN
- Why There Will Be A Robot Uprising
- Every Country Will Have Armed Drones Within Ten Years
- April 2013 report to the U.N. on Lethal autonomous robotics (LARs)
- Reigning in the Killer Robot? The DoD’s Directive on Autonomous Weapons
- DoD Directive: Autonomy in Weapon Systems
- Moral Machines: Teaching Robots Right from Wrong (Book)
- Governing Lethal Behavior in Autonomous Robots (Book)
- Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture
Imagine a major city completely covered by a video surveillance system designed to monitor its citizens’ every move. Now imagine that the system is run by a fast-learning machine intelligence designed to spot crimes before they even happen. No, this isn’t the dystopian dream of a cyberpunk science fiction author, or of the writers of the TV show “Person of Interest”. This is Boston, on the US East Coast, and it could soon be the reality in many more cities around the world.
In the aftermath of the Boston Marathon bombings in April of last year, as law enforcement and the world’s media struggled to make sense of the tragedy, the Boston Police Department contacted a company well-known for developing innovative and cutting-edge surveillance technology based on advanced artificial intelligence. Behavioral Recognition Systems, Inc. (BRS Labs) is a software development company based out of a nondescript office block in Houston, Texas, with the motto: “New World. New security.”
“Artificial intelligence is already in use across surveillance networks around the world. At high security sites like prisons, nuclear facilities or government agencies, it’s commonplace for security systems to set up a number of rules-based alerts for their video analytics. So if an object on the screen (a person, or a car, for instance) crosses a designated part of the scene, an alert is passed on to the human operator. The operator surveys the footage, and works out if further action needs to be taken… BRS Labs’ AISight is different because it doesn’t rely on a human programmer to tell it what behaviour is suspicious. It learns that all by itself.”
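The contrast drawn above, hand-written rules versus behavior the system learns on its own, can be illustrated with a simple learned-baseline anomaly detector. The z-score approach below is a stand-in chosen for clarity; BRS Labs’ actual AISight algorithms are proprietary and far more sophisticated.

```python
import statistics

# Instead of a programmer-defined rule ("alert if object crosses this line"),
# learn what "normal" looks like from observed data, then flag sharp outliers.
class BehaviorBaseline:
    def __init__(self, threshold=3.0):
        self.observations = []
        self.threshold = threshold  # flag values > 3 std devs from the mean

    def observe(self, value):
        """Fold a routine observation (e.g. an object's speed) into the baseline."""
        self.observations.append(value)

    def is_anomalous(self, value):
        """True if value deviates sharply from the learned baseline."""
        mean = statistics.mean(self.observations)
        stdev = statistics.pstdev(self.observations) or 1e-9
        return abs(value - mean) / stdev > self.threshold

baseline = BehaviorBaseline()
for speed in [4.8, 5.1, 5.0, 4.9, 5.2, 5.0]:  # typical walking speeds observed
    baseline.observe(speed)

print(baseline.is_anomalous(5.1))   # ordinary walking pace -> False
print(baseline.is_anomalous(30.0))  # something racing through the scene -> True
```

The key design difference is where the definition of “suspicious” lives: in a rules-based system it is written by a human up front, while a learning system like the sketch above derives it from the footage itself and adapts as the scene changes.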
The Defense Advanced Research Projects Agency (DARPA) has created a division that merges biology, engineering, and computer science to advance technologies for national security.
The goal of the Biological Technologies Office (BTO) is to develop next-generation systems that are inspired by the life sciences. Biology is among the core sciences that represent the future of defense technology, DARPA said. The BTO will expand on the work already carried out by DARPA’s Defense Sciences (DSO) and Microsystems Technology (MTO) Offices, particularly in disciplines such as neuroscience, sensor design, and microsystems.