‘If North Korea’s dictator Kim Jong Un ever orders troops into the demilitarized zone, an army of South Korean robots could be waiting.
A Samsung subsidiary plans to deploy sentry robots to the tense South Korean border. The machines will be equipped with machine guns and cameras, thermal imaging and laser range finders capable of detecting intruders up to 2 1/2 miles away.
Samsung Techwin says the decision to fire must be made by a human in a remote bunker. Experts have suggested, however, that an operator could hack into the robot to enable it to make its own lethal decisions.
“If there has to be a decision, somebody has to turn on a trigger or put a key in for the lethal part,” said Alex Pazos, Samsung Techwin’s director of application engineering in Latin America, where it uses unarmed versions of the surveillance robots.
The robots represent the cutting edge of cyber technologies that increasingly give machines control over life-or-death decisions. For now, the robots are adept at making stark choices in places such as the Korean demilitarized zone, where no people are allowed.
Though unmanned drones in the sky have drawn a lot of attention, a Tribune-Review investigation finds that ground-based droids — the real-world descendants of Hollywood sci-fi movies — are becoming smarter and deadlier, pushing the line at which ethical questions must be resolved. The Army has more than 7,000 less-sophisticated ground robotics systems for missions such as reconnaissance and bomb detection and removal.’
As part of the deal, Brazil will get 30 PackBot 510 units, which usually cost about $100,000 to $200,000 apiece. The contracts include services, spares, and associated equipment.’
‘Killer robots that can attack targets without any human input “should not have the power of life and death over human beings,” a new draft U.N. report says.
The report for the U.N. Human Rights Commission posted online this week deals with legal and philosophical issues involved in giving robots lethal powers over humans, echoing countless science-fiction novels and films. The debate dates to author Isaac Asimov’s first rule for robots in the 1942 story “Runaround”: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
Report author Christof Heyns, a South African professor of human rights law, calls for a worldwide moratorium on the “testing, production, assembly, transfer, acquisition, deployment and use” of killer robots until an international conference can develop rules for their use.
His findings are due to be debated at the Human Rights Council in Geneva on May 29. According to the report, the United States, Britain, Israel, South Korea and Japan have developed various types of fully or semi-autonomous weapons.’
by Daniela Hernandez
‘Researchers at the University of Electro-Communications in Tokyo and the Okinawa Institute of Science and Technology have built a small humanoid robot that plays baseball — or something like it. The bot can hold a fan-like bat and take swings at flying plastic balls, and though it may miss at first, it can learn with each new pitch and adjust its swing accordingly. Eventually, it will make contact.
The robot, you see, is also equipped with an artificial brain. Based on an Nvidia graphics processor, or GPU, kinda like the one that renders images on your desktop or laptop, this brain mimics the function of about 100,000 neurons, and using a software platform developed by Nvidia, the scientists have programmed these neurons for the task at hand, as they discussed in a recent paper published in the journal Neural Networks.
Yes, it’s fun. But through this baseball-playing robot, the scientists also hope to better understand how brains can be recreated with software and hardware — and bring us closer to a world where robots can handle more important tasks on our behalf.’
by JESSE EMSPAK
Machines can see and hear better than humans, but when it comes to the sense of touch, human skin has the advantage.
Now a team of materials scientists from Georgia Institute of Technology has built a flexible, pressure-sensing array of transistors that can be molded to different shapes and is sensitive enough to pick up slight pressures equal to that felt by human fingers.
The sensor could improve robotic prosthetics in a way that would let amputees feel what they hold, and allow robots to sense texture and manipulate delicate objects. The technology could even be embedded in a variety of devices for security measures.
Is the Pentagon trying to freak us all out, or do they just want to give zombies something to eat? The Department of Defense is reportedly almost finished building robots with “real” brains.
That’s according to National Defense Magazine, which this week profiled the Pentagon’s Defense Advanced Research Projects Agency (DARPA) lab and a little known project that has sucked down millions of dollars during the last few years: millions of dollars spent trying to replicate the human brain.
National Defense Magazine’s Sandra Erwin explores the “physical intelligence” program this week, a research and development initiative launched back in 2009 “to understand intelligence as a physical phenomenon and to make the first demonstration of the principle in electronic and chemical systems,” according to the Defense Department’s original solicitation.
Erwin says that four years later, a team of scientists led by University of California, Los Angeles Chemistry Professor James K. Gimzewski is just “inches away from the finish line” in terms of reaching their goal.
Gimzewski and crew have constructed a tiny machine, Erwin writes, that allows robots to act independently. How independently? It won’t rely on the conventional computer code used to program the cyborgs and robots of Hollywood sci-fi flicks, but will instead use microscopic wires to emulate the electrical and chemical pulses sent from cell to cell within the human brain.
“Rather than move information from memory to processor, like conventional computers, this device processes information in a totally new way,” says the scientist.
by Patricia Kime
The Defense Department’s pioneer research arm will play a major role in President Obama’s ambitious plan to map the human brain.
The White House announced Tuesday the launch of the BRAIN Initiative — standing for Brain Research through Advancing Innovative Neurotechnologies — that will include $100 million for the Defense Advanced Research Projects Agency, the National Science Foundation and the National Institutes of Health to develop technologies to explore and understand brain function.
Similar to the massive effort to map the human genome, the BRAIN Initiative will attempt to reveal how individual brain cells and neural circuits work and function together.
by Steve Watson
DARPA, the technological arm of the Pentagon, has developed a robotic arm that can perform precise actions using tools, a huge leap forward in the evolution of robotics, but one that comes with potentially destructive implications.
Extremetech reports that, unlike many other robotics developers out there, DARPA has developed a cheap robotic hand, for under $3000, that can “almost match human performance in dexterous activities, like changing a tire.”
The report warns that the development could be “ominous”, in that our biggest advantage over other forms of life on Earth is that we have the ability and intellect to precisely use tools.
As highlighted in the following video, the DARPA robotic arm can perform detailed tasks such as using a pair of tweezers to pick up objects.
DARPA also notes that the developments shown in the video are now outdated, and that the latest models are much more advanced, performing tasks such as threading a nut onto a bolt, opening a zipper, and recognizing objects by touch.
Gill Pratt, a program manager at DARPA, told the New York Times that developing the ability to move like a human hand has a lot of important military uses.
The Extremetech report notes that these developments are “pretty cool”, so long as the machine doesn’t figure out how to “rise up against its creators”.
That may sound far-fetched, but it is something that experts have been warning about for some time.
Last year, when Department of Defense contractor Boston Dynamics announced that it now has a robot that can run faster than the fastest human on the planet, with a flexible spine to help it “zigzag to chase and evade,” Noel Sharkey, professor of artificial intelligence and robotics at the University of Sheffield, said the robot was “an incredible technical achievement, but it’s unfortunate that it’s going to be used to kill people”.
“It’s going to be used for chasing people across the desert, I would imagine. I can’t think of many civilian applications – maybe for hunting, or farming, for rounding up sheep,” Sharkey added.
“But of course if it’s used for combat, it would be killing civilians as well as it’s not going to be able to discriminate between civilians and soldiers,” he said.
Sharkey has previously warned that the world may be sleepwalking into a potentially lethal technocracy and has called for safeguards on such technology to be put into place.
In 2008, Professor Sharkey told listeners of the Alex Jones show:
“If you have an autonomous robot then it’s going to make decisions who to kill, when to kill and where to kill them. The scary thing is that the reason this has to happen is because of mission complexity and also so that when there’s a problem with communications you can send a robot in with no communication and it will decide who to kill, and that is really worrying to me.”
The professor also warned that such autonomous weapons could easily be used in the future by law enforcement officials in cities, pointing out that South Korean authorities are already planning to have a fully armed autonomous robot police force in their cities.
Boston Dynamics has also been contracted by DARPA to develop and build humanoid robots that can act intelligently without supervision, in a deal worth $10.9 million.
The DoD announced last year that “The robotic platforms will be humanoid, consisting of two legs, a torso, two arms with hands, a sensor head and on board computing.”
DARPA’s website says that the robots will help “conduct humanitarian, disaster relief and related operations.”
“The plan identifies requirements to extend aid to victims of natural or man-made disasters and conduct evacuation operations,” reads the brief, first released in April 2012 as part of DARPA’s ‘Robotics Challenge’.
The robots will operate with “supervised autonomy”, according to DARPA, and will be able to act intelligently by themselves, making their own decisions if and when direct supervision is not possible.
The Pentagon also envisions that the robots will be able to use basic and diverse “tools”.
“The primary technical goal of the DRC is to develop ground robots capable of executing complex tasks in dangerous, degraded, human-engineered environments. Competitors in the DRC are expected to focus on robots that can use standard tools and equipment commonly available in human environments, ranging from hand tools to vehicles, with an emphasis on adaptability to tools with diverse specifications,” reads the original brief.
The robots are set to be completed by Aug. 9, 2014, according to the contract.
Boston Dynamics has enjoyed a long working relationship with DARPA, during which time it has also developed the rather frightening BigDog. This hydraulic quadruped robot can carry loads of up to 340 lb, meaning it can be effectively weaponised, and recovers its balance even after sliding on ice and snow:
BigDog now also has an arm, and has been demonstrated picking up and throwing heavy objects significant distances:
The company also developed RiSE, a robot that climbs vertical terrain such as walls, trees and fences, using feet with micro-claws to climb on textured surfaces:
In addition to a host of other smaller robots, Boston Dynamics is also developing PETMAN, a robot that simulates human physiology and balances itself as it walks, squats and does calisthenics:
While the Pentagon says the robots are for “humanitarian” missions, it is hard not to imagine this kind of military-grade technology being adapted for more aggressive purposes.
Indeed, the Pentagon has, in the past, issued a request to contractors to develop teams of robots that can search for, detect and track “non-cooperative” humans in “pursuit/evasion scenarios”.
Issued in 2008, the request called for a “Multi-Robot Pursuit System” to be operated by one person.
The proposal described the need to
“…develop a software/hardware suite that would enable a multi-robot team, together with a human operator, to search for and detect a non-cooperative human subject.
The main research task will involve determining the movements of the robot team through the environment to maximize the opportunity to find the subject, while minimizing the chances of missing the subject. If the operator is an active member of the search team, the software should minimize the chance that the operator may encounter the subject.”
It is seemingly important to the Pentagon that the operator should not have to come into contact with the person being chased down by the machines.
The Pentagon’s blue-sky research agency is readying a nearly four-year project to boost artificial intelligence systems by building machines that can teach themselves — while making it easier for ordinary schlubs like us to build them, too.
When Darpa talks about artificial intelligence, it’s not talking about modeling computers after the human brain. That path fell out of favor among computer scientists years ago as a means of creating artificial intelligence; we’d have to understand our own brains first before building a working artificial version of one. But the agency thinks we can build machines that learn and evolve, using algorithms — “probabilistic programming” — to parse through vast amounts of data and select the best of it. After that, the machine learns to repeat the process and do it better.
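Darpa’s framing is abstract, but the core idea of probabilistic programming (declare a model, then let generic inference machinery weigh hypotheses against data) can be sketched in a few lines. The toy below is a hand-rolled Bayesian update over coin-bias hypotheses; it is purely illustrative and has nothing to do with any actual PPAML tooling:

```python
# Hand-rolled sketch of probabilistic inference: state hypotheses and a
# likelihood model, then let a generic update rule pick out the best fit.
# Everything here (hypotheses, data, names) is an invented toy example.

def bayes_update(prior, likelihood, observations):
    """Return the posterior over hypotheses after seeing the observations."""
    posterior = dict(prior)
    for obs in observations:
        # weight each hypothesis by how well it predicts this observation
        for h in posterior:
            posterior[h] *= likelihood(h, obs)
        total = sum(posterior.values())
        posterior = {h: w / total for h, w in posterior.items()}
    return posterior

# Toy model: which bias does a coin have? Hypotheses are P(heads).
prior = {0.3: 1 / 3, 0.5: 1 / 3, 0.8: 1 / 3}
likelihood = lambda h, obs: h if obs == "H" else 1 - h

posterior = bayes_update(prior, likelihood, "HHTHHHHH")
best = max(posterior, key=posterior.get)
print(best)  # the heads-heavy data makes the 0.8-bias hypothesis dominate
```

Real probabilistic-programming systems automate exactly this separation: the scientist writes only the model, and the inference engine does the rest.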
But building such machines remains really, really hard: The agency calls it “Herculean.” Development tools are scarce, which means “even a team of specially-trained machine learning experts makes only painfully slow progress.” So on April 10, Darpa is inviting scientists to a Virginia conference to brainstorm. What will follow are 46 months of development, along with annual “Summer Schools,” bringing the scientists together with “potential customers” from the private sector and the government.
Under the program, called “Probabilistic Programming for Advanced Machine Learning,” or PPAML, scientists will be asked to figure out how to “enable new applications that are impossible to conceive of using today’s technology,” while making experts in the field “radically more effective,” according to a recent agency announcement. At the same time, Darpa wants to make it simpler for non-experts to build machine-learning applications, too.
It’s no surprise the mad scientists are interested. Machine learning can be used to make better systems for intelligence, surveillance and reconnaissance, a core military necessity. The technology can be used to make better speech-recognition applications and self-driving cars. It keeps pace with the ever-enlarging war against internet spam filling our search engines and e-mail inboxes.
by Nick Collins
Researchers from the Sheffield Centre for Robotics programmed a group of 40 small robots which could organise themselves into a group and work together to solve simple tasks.
Swarming robots could eventually be shrunk to a microscopic size for use in medical procedures, because they require no memory and could function without a processor, experts said.
They could also be built to larger sizes and used in military or search-and-rescue operations which are too dangerous or inaccessible for people to venture into, or used in manufacturing to improve safety in industry.
The robots, which will be demonstrated at the Gadget Show Live in Birmingham this week, use a simple form of artificial intelligence to perform basic functions.
For example, when scattered at random across a room, they can arrange themselves into a group simply by each robot detecting whether there is another directly in front of it.
If any individual robot finds another in its path it turns around, and if the route is clear it begins moving outward in a spiral until it finds another robot. This eventually results in the whole group clumping together.
The robots are also able to arrange themselves into a particular order, for example by size, and to fetch objects by clustering around them and collectively pushing them in the same direction.
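For readers curious how such minimal logic can produce clumping, here is a toy one-dimensional caricature of the behaviour described above. The sweep schedule, sensing range and freezing rule are invented simplifications, not the Sheffield team’s actual controller:

```python
# Toy 1-D caricature of the aggregation rule: each robot sweeps outward from
# its start with growing amplitude (a 1-D stand-in for the expanding spiral)
# and stops once it detects another stopped robot within sensing range.
# Parameters and the freezing rule are my simplification for illustration.

def sweep_offset(t):
    """Offsets 0, +1, -1, +2, -2, ...: an expanding back-and-forth sweep."""
    k = (t + 1) // 2
    return k if t % 2 == 1 else -k

def aggregate(starts, sense=2, max_steps=10_000):
    frozen = [starts[0]]                      # seed robot never moves
    movers = {i: s for i, s in enumerate(starts[1:], start=1)}
    for t in range(max_steps):
        for i, s in list(movers.items()):
            pos = s + sweep_offset(t)
            # "found another robot": stop here and join the clump
            if any(abs(pos - f) <= sense for f in frozen):
                frozen.append(pos)
                del movers[i]
        if not movers:
            break
    return sorted(frozen), movers

positions, leftover = aggregate([0, 15, -9, 40, 22], sense=2)
print(positions, leftover)
```

Because each robot only ever stops within sensing range of one already stopped, the final positions form a connected chain: the whole group ends up clumped together, with no robot left wandering.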
Dr Roderich Gross, who led the project, said: “We are developing artificial intelligence to control robots in a variety of ways. The key is to work out what is the minimum amount of information needed by the robot to accomplish its task.
“That’s important because it means the robot may not need any memory, and possibly not even a processing unit, so this technology could work for nanoscale robots, for example in medical applications.”
Scientists have previously suggested that tiny “nanobots” could be injected into patients to deliver drugs to specific targets, such as cancer cells, and to monitor conditions like diabetes, as well as being used in surgery.
by Paul Joseph Watson
A new flying drone developed by researchers at the University of Pennsylvania could one day be used to snatch humans off the street.
Justin Thomas and his colleagues at the GRASP Lab have produced an “avian-inspired” claw drone that mimics the way an eagle uses its talons to grab a fish out of the ocean.
A video clip of the drone shows the UAV swooping down at high speed to snatch an object using its 3D printed mechanical claw. By mimicking how a bald eagle sweeps its legs and claws backwards to aerodynamically close in on its prey without the need to slow down, the drone is able to grasp a stationary object with precise efficiency.
Drexel University’s Christopher Korpela is simultaneously developing flight stability software for drones with arms that would enable the UAVs to carry a weighty object without falling out of the air. The eventual purpose of the drones would be focused around “interacting with people or the environment,” although that is still a long way off according to Korpela.
Technology journalist Adario Strange envisages a future scenario where a larger version of the eagle claw drone could be used by law enforcement or military to pluck humans off the ground.
“The optimistic view of this development offers a vision of an emergency situation in which a drone could rapidly fly in and save a person from a perilous situation, but it’s also fairly easy to imagine law enforcement and the military using this development to grab human targets in coming years,” writes Strange, reporting for DVice.com.
“We may be about to see a return to the days when unseen hunters lurking in the sky could easily snatch a human right off the street,” he adds, referring to the pterosaur, a flying reptile that died out some 65 million years ago.
Although this incarnation of the eagle claw drone is far too small to snatch and grab a human, the potential that larger models could be deployed for that very purpose in future is sure to make many nervous.
As we reported yesterday, military insiders like Lt. Col. Douglas Pryer are warning that drone technology will soon metastasize into armies of remorseless killer robots which will be used to stalk and incapacitate human targets.
Noel Sharkey, professor of artificial intelligence and robotics at the University of Sheffield, has also repeatedly warned that the robots currently being developed under the auspices of DARPA will eventually be used to kill.
“Of course if it’s used for combat, it would be killing civilians as well as it’s not going to be able to discriminate between civilians and soldiers,” said Sharkey.
The following video… shows machines’ future capabilities.
The footage was filmed during a presentation by Professor Raffaello D’Andrea from ETH Zurich about the future capabilities of machines that makes extensive use of flying robots (quadcopter drones).
The really impressive part (at least for me) of the “Feedback Control and the Coming Machine Revolution” presentation comes at around the 19:00 mark, when three drones throw a ball in the air and move to intercept it, showing how machines can together achieve tasks beyond their individual capabilities.
Just imagine the same behaviour in the drones which provide homeland security circling over your house or hunting terrorists in theater: even if pilots are still sitting inside a ground control station to guide them, they will be able to make fast, autonomous, efficient, cooperative decisions, removing the possibility of human error and slow reaction time.
Nice (until they gain self-awareness, requiring a John Connor to save us all).
The rise of the robot in the 21st century can be directly related to the rise of drone technologies perfected by the United States military. Drones, or UAVs (unmanned aerial vehicles), have unexpectedly become popular in the mainstream media, mostly due to conspiracy theories and Kentucky senator Rand Paul’s 13-hour filibuster before Congress. Senator Paul schooled the congressional committee on how drones are currently being used to kill innocent civilians in Pakistan and Afghanistan, and how drones over American skies could potentially be used for everything from unwarranted spying to capturing and killing terrorists and criminals. Drones for spying and warfare can come as small as mosquitos, and these drones can do everything from recording conversations to emitting deadly bio-chemical weapons. The use of drones has grown so much that they comprise over 30% of all US military aircraft. But the real danger of drones lies in the warning signs they offer of the eventual steps toward A.I. becoming self-aware.
These warnings can eerily be traced to one word: SKYNET – a fictional self-aware robotic intelligence system that threatened to eradicate humanity in the Terminator films. In the franchise storyline, Skynet was an advanced computer system created for the U.S. military by defense contractor Cyberdyne Systems. Skynet was billed as the “Global Digital Defense Network” and given Internet command, with cloud technology, over all computerized military hardware systems. This robotic WiFi brain system would eventually lead to self-awareness, and shortly after being implemented on April 19, 2011, SKYNET launched a nuclear war that killed billions. While we are still decades away from the scenario described in the fictional Terminator series, the building blocks that could compose this doom are already being assembled. In fact, as shocking as it sounds, there is even a SKYNET telecommunications satellite in orbit right now! Jonathan Amos, science correspondent for the BBC, writes:
“The Skynet system, which includes the radio equipment deployed on ships, on vehicles and in the hands of troops, is the UK’s single biggest space project. It is valued at up to £3.6bn over 20 years and is run by a commercial company, Astrium, in a Private Finance Initiative (PFI) with the Ministry of Defence (MoD). UK forces pay an annual service charge for which they get guaranteed bandwidth, with spare capacity then sold to “friendly forces”. These third party customers include the Nato allies such as the US. The Ariane left the ground at precisely 18:49 local time (21:49 GMT) and dropped off Skynet-5D 27 minutes later over the east coast of Africa. 5D will now use its own propulsion system to move into a geostationary position at an altitude of 36,000km. The eventual operating position early next year will be at 53 degrees East. The first three spacecraft in the Skynet series were launched in 2007-2008. They all match the sophistication of the very latest civilian platforms used to pass TV, phone and internet traffic, but have been “hardened” for military use. Classified technologies on board will resist, for example, attempts to disable the spacecraft with lasers or to “jam” their operation with rogue signals.”
Since its launch, SKYNET has been integrated with NATO military operations and is solely responsible for the destruction caused by NATO’s executioner drone programs. According to Wikipedia, “Skynet is a family of military satellites, now operated by Paradigm Secure Communications on behalf of the Ministry of Defence, which provide strategic communication services to the three branches of the British Armed Forces and to NATO forces engaged on coalition tasks.” During the 2012 NATO summit, politicians and military leaders talked openly for the first time about using and legalizing robots for warfare. While the drone wars have already begun, the era of robot wars is fast approaching.
“Whether it is motherships, swarms, or some other concept of organizing for war that we have not yet seen, it is still unclear what doctrines the U.S. military will ultimately choose to organize its robots around. Whatever doctrine prevails, it is clear that the American military must begin to think about the consequences of a 21st century battlefield in which it is sending out fewer humans and more robots.”
“Of course, you can’t run a fiber backbone through the air or summon one up at will on the battlefield. That’s why the Defense Advanced Research Projects Agency has launched a program to create technology with the same sort of bandwidth as fiber optic backbones—100 gigabits per second. If successful, the program could mean not just faster data connections on the battlefield, but better broadband for people in remote areas and cheaper expansion of cellular networks. The effort, called the 100 Gigabit-per-second RF Backbone (or 100G in DARPA shorthand), seeks to do more than just overcome the physics that limit current radio-based data connections using the Defense Department’s Common Data Link (CDL) standard protocol. The initiative is searching for a solution that will be able to be deployed both to the battlefield and aboard aircraft—and work at distances of over 200 kilometers…The most likely route to creating this sort of Skynet is to use the same sort of technology used to collect much of the data in the first place—synthetic aperture antenna technology. There have been a number of efforts to turn the Active Electronically Scanned Array (AESA) radars of fighter aircraft into dual-purpose systems capable of both acting as a radar and as a data link. Raytheon, L-3 Communications and other companies working on previous DARPA-funded projects have demonstrated the creation of airborne mobile ad-hoc networks by connecting a data modem to an AESA radar, turning some of its transmission array into a multiplexed transmitter and establishing network connections of over 4.5 gigabits per second. DARPA sees the next leap in data throughput coming from improvements in extreme high frequency (EHF) radio technology. Using wavelengths measured in millimeters, EHF frequencies—such as the 60 gigahertz frequency used at the top end of the WiGig standard—are typically only effective for communications at short range and within line of sight.
But DARPA believes that by using techniques in the modulation of signals, including quadrature amplitude modulation (QAM), the millimeter wave band can be used over much greater distances, through cloud cover, and to achieve even higher throughput.”
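The payoff DARPA is chasing is easy to quantify: an M-point QAM constellation carries log2(M) bits per symbol, so the raw data rate is the symbol rate multiplied by that figure, before coding overhead. A quick sketch, using invented link numbers purely for illustration:

```python
import math

def qam_bits_per_symbol(m):
    """An M-point QAM constellation encodes log2(M) bits in each symbol."""
    return int(math.log2(m))

def raw_rate_gbps(symbol_rate_gbaud, m):
    """Uncoded data rate in Gb/s: symbols per second times bits per symbol."""
    return symbol_rate_gbaud * qam_bits_per_symbol(m)

# Hypothetical figures for illustration only: moving a 12.5-Gbaud
# millimeter-wave link from QPSK (4-QAM) to 64-QAM triples the raw bit rate.
print(raw_rate_gbps(12.5, 4))   # 25.0
print(raw_rate_gbps(12.5, 64))  # 75.0
```

Higher-order constellations demand a cleaner signal, which is why the modulation tricks DARPA cites matter: they are what make dense QAM usable over long, cloudy millimeter-wave paths at all.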
“Using pattern recognition and sound recognition, the neural network could get closer to understanding context and information surrounding a central target, like scanning the background of an image to learn where a photo was taken by using existing similar images and geotag data. Of course, Google is quick to point out that this is really still just the first step towards a true artificial intelligence. Although Google’s neural network technology is smaller than a human brain, can beat humans at certain tasks, and can teach itself and get more efficient at learning, it still can’t reason, which is essential for intelligence. So, the neural network can find specific visual data faster than humans can, and it can match shapes and patterns, and ultimately do jobs that would be incredibly tedious and boring for humans. But, it can’t draw from the outside world and reason out the why or how of a thing. Why and how are the most powerful of all questions, and both the asking and drive to answer those questions are the true mark of intelligence. Google’s brain can’t do that yet. But, the question still stands: when the day does come that Google or some other company creates a true artificial intelligence, will you be there with pitchforks and torches, or with an offering of peace for our new Cylon overlords?”
Journalist Ken Schwencke has occasionally awakened in the morning to find his byline atop a news story he didn’t write.
No, it’s not that his employer, The Los Angeles Times, is accidentally putting his name atop other writers’ articles. Instead, it’s a reflection that Schwencke, digital editor at the respected U.S. newspaper, wrote an algorithm — that then wrote the story for him.
Instead of personally composing the pieces, Schwencke developed a set of step-by-step instructions that can take a stream of data — this particular algorithm works with earthquake statistics, since he lives in California — compile the data into a pre-determined structure, then format it for publication.
His fingers never have to touch a keyboard; he doesn’t have to look at a computer screen. He can be sleeping soundly when the story writes itself.
Just call him robo-reporter.
“I doubt that people who read our (web) posts — unless they religiously read the earthquake posts and realize they almost universally follow the same pattern — would notice,” Schwencke said. “I don’t think most people are thinking that robots are writing the news.”
But in this case, they are. And that has raised questions about the future of flesh-and-blood journalists, and about journalism ethics.
Algorithms are fairly versatile, and have been doing a great number of things we sometimes don’t even think about, from beating us at computerized chess to auto-correcting our text messages.
Jamie Dwyer holds a bachelor of science in computing science from the University of Ontario Institute of Technology, and provides IT support for Environment Canada. Dwyer said algorithms can be highly complex computer codes or relatively simple mathematical formulas. They can even sometimes function as a recipe of sorts, or a set of repeatable steps, designed to perform a specific function.
In this case, the algorithm functions to derive and compose coherent news stories from a stream of data.
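The pipeline Schwencke describes (structured data in, templated prose out) can be sketched in a handful of lines. The field names and wording below are invented for illustration; they are not the Times’ actual code or the USGS feed schema:

```python
# Minimal template-driven story generator in the spirit of the earthquake
# bot described above. The input fields and sentence wording are invented
# examples, not the LA Times' real implementation.

TEMPLATE = (
    "A magnitude {magnitude} earthquake struck {distance} miles from "
    "{place} at {time} on {date}, according to the U.S. Geological Survey."
)

def write_story(quake: dict) -> str:
    """Compile one structured data record into a publishable sentence."""
    return TEMPLATE.format(**quake)

story = write_story({
    "magnitude": 4.7,
    "distance": 6,
    "place": "Westwood, California",
    "time": "6:25 a.m.",
    "date": "Monday",
})
print(story)
```

Every editorial judgment (which fields matter, how to phrase them) is frozen into the template ahead of time, which is exactly the point Hermida makes below about the reporter’s values living inside the algorithm.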
Schwencke says the use of algorithms on routine news tasks frees up professional reporters to make phone calls, do actual interviews, or dig through sophisticated reports and complex data, instead of compiling basic information such as dates, times and locations.
“It lightens the load for everybody involved,” he said.
Yet there are ethical questions — such as putting someone’s name atop a written article he or she didn’t in fact write or research.
Alfred Hermida, associate professor at the University of British Columbia, and a former journalist, teaches a course in social media, in which he takes time to examine how algorithms affect our understanding of information.
He says that algorithms, like human beings, need to decide what is worth including, and make judgments on newsworthiness.
“If the journalist has essentially built that algorithm with those values, then it is their work,” Hermida said. “All the editorial decisions were made by the reporter, but they were made by the reporter in an algorithm.”
The greater issue, he says, is demystifying the technology for the reader.
Hermida says that many of the algorithms we encounter everyday exist in a black box of sorts, in which we see the results, but do not understand the process.
“Understanding how the algorithms work is really important to how we understand the information,” Hermida said.
Algorithms like Schwencke’s are relatively simple, for now. They’re best suited to small-scale streams of data that are being regularly updated with consistently formatted information.
For instance, baseball may be a good avenue for news algorithms, because the game is heavy with statistics, says Paul Knox, associate professor for the School of Journalism at Ryerson University in Toronto.
But even if an algorithm can analyze and manipulate data fairly well, journalism is still based on not only filtering, but also finding other available information, Knox notes, and a mathematical construct lacks the ability to dig up new facts or add context.
On the other hand, “People are already reading automated data reports that come to them, and they don’t think anything of it,” said Ben Welsh, a colleague of Schwencke’s at the Times.
One example is any smartphone app that displays personalized weather information based on the owner’s location.
“That’s a case where I don’t think anyone really blinks,” Welsh said. “It’s just a kind of natural computerization and personalization of a data report that had been done in a pretty standard way by newspapers for probably a century.”
And Welsh says that responsibility for accuracy falls where it always has: with publications, and with individual journalists.
“The key thing is just to be honest and transparent with your readers, like always,” he said. “I think that whether you write the code that writes the news or you write it yourself, the rules are still the same.”
“You need to respect your reader. You need to be transparent with them, you need to be as truthful as you can… all the fundamentals of journalism just remain the same.”
Although algorithms in news are paired with simple data sets for now, as they get more complicated, more questions will be raised about how to effectively code ethics into the process.
Lisa Taylor is a lawyer and a journalist who teaches an ethics class to undergraduate students in the School of Journalism at Ryerson University.
“Ultimately, it’s not about the tool,” said Taylor. “At (the algorithms’) very genesis, we have human judgment.”
Taylor said that using algorithms ethically and reasonably shouldn’t be difficult; the onus is on the reporter to decide which tools to use and how to use them properly.
“The complicating factor here is a deep suspicion journalists and news readers have that any technological advancement is going to be harnessed purely for its cost-cutting abilities,” said Taylor.
According to Taylor, journalists will have to start discussing algorithms, just as they talk about Twitter.
“How can we use this effectively, reasonably, and in a way that honours the (tenets) of journalism?” Taylor asked.
by Blaire Briody
If you meet Baxter, the latest humanoid robot from Rethink Robotics, you should get comfortable with him, because you’ll likely be seeing more of him soon.
Rethink Robotics released Baxter last fall and received an overwhelming response from the manufacturing industry, selling out of their production capacity through April. He’s cheap to buy ($22,000), easy to train, and can safely work side-by-side with humans. He’s just what factories need to make their assembly lines more efficient – and yes, to replace costly human workers.
But manufacturing is only the beginning.
This April, Rethink will launch a software platform that will allow Baxter to do a more complex sequencing of tasks – for example, picking up a part, holding it in front of an inspection station and receiving a signal to place it in a “good” or “not good” pile. The company is also releasing a software development kit soon that will allow third parties – like university robotics researchers – to create applications for Baxter.
These third parties “are going to do all sorts of stuff we haven’t envisioned,” says Scott Eckert, CEO of Rethink Robotics. He envisions something similar to Apple’s app store happening for Baxter. A spiffed-up version of the robot could soon be seen flipping burgers at McDonald’s, folding t-shirts at Gap, or pouring coffee at Starbucks.
“Could [Baxter] be a barista?” asks Eckert. “It’s not a target market, but it’s something that’s pretty repeatable. Put a cup in, push a button, espresso comes out, etc. There are simple repeatable service tasks that Baxter could do over time.”
Companies might not need to wait for a more advanced version of Baxter – MIT already has a BakeBot that can read recipes, whip together cookie dough and place it in the oven. The University of California at Berkeley has a robot that can do laundry and fold T-shirts. Robot servers have started waiting tables at restaurants in Japan, South Korea, China and Thailand – and just last week, a robot served Passover matzah to President Obama during his trip to Israel.
“Every year, machines are getting more capable of doing low-level tasks,” says Professor Seth Teller, a robotics researcher at MIT’s Computer Science and Artificial Intelligence Lab.
Many experts worry about what robots in the service sector could do to employment. The national unemployment rate remains at 7.7 percent – not remotely close to the 4.7 percent unemployment in 2007 before the recession. Job growth isn’t expected to return to pre-recession levels until 2017, and the recent sequestration could easily derail it. Manufacturing has already shed nearly 6 million jobs since 2000.
“When machines and robots start taking over service sector jobs, that’s when we’ll really start to notice,” says Martin Ford, robotics expert and author of The Lights In the Tunnel: Automation, Accelerating Technology and the Economy of the Future. “If you’re making hamburgers or Starbucks drinks, that’s really just high manufacturing.”
What’s worrisome to Ford is that these jobs have been offering a huge safety net to the middle class. They’re jobs he calls “the jobs of last resort.” When someone can’t find a salaried job, they look for lower-paying service jobs to get by – and because the jobs typically have a high turnover rate, they’re more likely to be available. Think of all the college graduates who take jobs as cashiers or baristas before they find salaried work. If those jobs were to vanish, those workers would be forced to file for unemployment instead.
Retail and service industries are the largest employers in the U.S., accounting for nearly 20 percent of total employment in 2011, according to the latest data available from the BLS. The retail sector employs nearly 14.8 million people, with Walmart employing 10 percent of them. On top of that, one in five retail workers are the sole income earners in their household. The U.S. restaurant industry employs 9.5 million people, and nearly 50 percent of all adults have worked in the restaurant industry at some point in their life, according to a 2012 report from the Workforce Strategies Initiative at the Aspen Institute. Compare these numbers to the tech job “boom” at companies like Facebook, Apple, Amazon and Google – and you get a mere 190,000 people.
Restaurant work also supports aging boomers as they transition out of the workforce – 12 percent of restaurant workers are 55 and older. “Many older Americans have fallen back on jobs in the restaurant industry, as they seek to transition to a new career or are simply unable to find other work,” write the authors of the Aspen Institute report.
Teller at MIT argues that economic disruption from technology is nothing new – we’ve seen it before with inventions like the cotton gin, the automobile, and the personal computer. “One way to frame this is robots are taking human jobs away, but technology has, throughout history, transformed the nature of human jobs,” he says. “As machines get more capable, they take on functions that were previously performed by people. There’s a displacement, certainly, but we’re still seeing this transformation play out, so you just don’t know whether there’s going to be a net gain or a loss [of jobs].”
According to Teller, Baxter and other robots could create jobs in new industries we haven’t even envisioned yet. The PC, for example, eliminated plenty of jobs while creating millions of others. And he has a point – Baxter is creating some jobs. Rethink Robotics employs 85 people at its Boston headquarters in jobs that would never have existed without Baxter – though most are high-level engineers, designers and salespeople.
At the factories that are buying Baxter, employers now create robot “managers” to oversee Baxter. Baxter is also made in the U.S., and Rethink employs some 100 people in factories and distributors – though in an ironic twist, they’re already planning to use Baxter to help build Baxter.
As robots move into other sectors and the home, Teller says the job opportunities are abundant. Robot IT and maintenance personnel, designers and salespeople for robot accessories, software, and apps, and robot security developers are just a few examples. “If personal robots are the next thing and everyone wants one in their house, doing the laundry and unloading the dishwasher, we’re talking about another decade of massive economic activity,” says Teller.
The PC, however, also created a decade of economic wealth – but the wealth has largely stayed at the top. Facebook, Apple, Amazon and Google don’t employ many people, relatively speaking, but they have about 6.25 percent of the market cap of all U.S. companies. Yes, PCs have created IT jobs and software developers, but the tech industry is small compared to retail and restaurant industries. Computer and mathematical jobs make up about 3 percent of the labor force, according to the BLS, and require advanced degrees and years of training. Will the U.S.’s higher education system be prepared for massive retraining? Will service employees have the time and resources to learn new skills? Will enough high-skill jobs be available for them? No one is quite sure where they’ll go when robots like Baxter push them out.
Erik Brynjolfsson, director of the MIT Center for Digital Business and co-author of Race Against the Machine, has been warning economists about the coming job disruption for years. “Technology doesn’t automatically lift the fortunes of all people,” Brynjolfsson said recently to a crowd at Wharton University in San Francisco. “Profits [in the U.S.] have never been higher, innovation is roaring along, GDP is high, but job creation is lagging terribly, and the share of profits going to labor is at a 60-year low. This is one of the most important issues facing our society.”
In what is sure to be only the beginning of human vs. robot confrontations, a surveillance robot belonging to the police was recently shot after a six-hour standoff with a 62-year-old heavily inebriated man.
As reported by the Ohio-based Chillicothe Gazette, officers in the town of Waverly responded to a complaint that shots were fired inside a bedroom in a home and that the homeowner had more guns and was threatening others. Police knocked on the door, called on the phone, and even brought in a trained negotiator, but the man refused to speak to anyone for several hours. So the officers contacted the Pike County Sheriff’s Department and the Highway Patrol’s Strategic Response Team for assistance.
What officers got was two search robots.
First, a camera-equipped robot entered the home to locate the man and the guns. A second, larger bot was then sent in, but when the owner spotted it, he opened fire with a small-caliber pistol, damaging it. Shortly afterward, police finally entered the home and used an electronic stun device to subdue him. After obtaining a search warrant, authorities found a number of firearms within the residence, including two AK-47 rifles and a 75-round ammunition drum, which is illegal in Ohio.
After being evaluated by medical doctors and mental health officials, the man will be charged with two felony counts of unlawful possession of a dangerous ordnance and vandalism of government property, among other charges.
Just as the military continues to use robots in situations dangerous to humans, police departments are embracing technologies such as automated license-plate readers, face ID scanners, taser cameras, facial recognition software, drones, and now robots. In fact, last November another Ohio police department was showing off its recently acquired $11,000 AVATAR surveillance robot from RoboteX, which will assist the SWAT team.
Robots like these are increasingly being used in standoffs in which armed people are not cooperating with police. For example, a related event occurred last year in Utah when two cousins who were roommates got into an argument and shots were fired. When SWAT arrived, one cousin surrendered but the other refused to come out. He did, however, surrender his shotgun when the police sent a robot in.
Police departments are looking to high-tech systems to make it even easier to catch lawbreakers and to protect the lives of officers. While there are certainly concerns about privacy and individual rights when the authorities have the kind of power that these technologies afford, a robot is much safer to interact with than an actual police officer. After all, the consequence of the intoxicated Ohio man’s actions is a charge of damaging police equipment rather than, at the least, attempted murder had he fired upon police officers.
Incidents between citizens and police robots will be on the rise as more bots are brought into service. Hopefully, we can remember that a potentially deadly armed standoff resulted in no one being hurt, thanks to technology and those who use it responsibly.
Here’s a sad thought for you: The rat robot (ratbot?) in the photo above was created solely to bully real rats until they’re depressed. This might sound like the work of a sadistic behavioral scientist, but there’s actually a rather noble cause: The development of effective antidepressants for humans.
In almost every case, drugs are extensively tested on animals before they can be tested on humans. Millions of rats and mice are used every year in the US and EU, mostly because they are mammals (and thus share a similar physiology to humans), and because they’re easy to breed and dispose of. Developments are being made in the realm of non-animal, lab-on-a-chip testing, but it will be many years — if ever — before animal testing finally stops.
It’s kind of hard to believe, but to test new drugs on an animal, the scientists must first give the animal the same (or similar) human malady. To test antidepressants, you first need a large number of depressed rats. Historically, rat depression has usually been instilled through forced swimming or electric shocks. The problem with these methods, though, is that human depression isn’t generally triggered by swimming or electrocution — and thus the ratbot was born.
Created by researchers at Waseda University in Tokyo, the WR-3 ratbot is essentially an attack robot that bullies rats into a more realistic, human-like state of depression. The WR-3 is programmed with three kinds of bullying: chasing, continuous attacking, and interactive attacking. Chasing is exactly what it sounds like — WR-3 tries to stay close to the rat, but never attacks it. Continuous attacking is where WR-3 continually rams the rat. With interactive attacking, WR-3 attacks the rat for five seconds whenever it moves — and then stops. The Japanese researchers found that the most effective way of instilling depression is to continuously attack young rats, and then use interactive attacks when they get older.
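The three bullying modes described above amount to a simple decision rule. The sketch below is a hypothetical rendering of that rule, not the Waseda team's actual control software; only the "attack for five seconds whenever it moves" behavior comes from the article, and the mode and action names are illustrative.

```python
# Decide the robot's action at a given instant for each of the three
# bullying modes: chasing, continuous attacking, interactive attacking.

def wr3_action(mode, rat_moved, seconds_since_move):
    """Return the robot's action given its mode and the rat's movement."""
    if mode == "chasing":
        return "follow"   # stay close to the rat, but never strike
    if mode == "continuous":
        return "attack"   # ram the rat without pause
    if mode == "interactive":
        # attack for five seconds whenever the rat moves, then stop
        if rat_moved or seconds_since_move < 5:
            return "attack"
        return "idle"
    raise ValueError(f"unknown mode: {mode}")

assert wr3_action("chasing", True, 0) == "follow"
assert wr3_action("interactive", False, 3) == "attack"
assert wr3_action("interactive", False, 7) == "idle"
```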
The researchers now need to compare their new model of rat depression against other methods, such as forced swimming. The team seems confident, though, that their robotically-depressed rats are a much better target for the development of antidepressant drugs.
On the one hand, this is an exciting leap for pharmacology and the development of drugs that could save many human lives. On the other, though, I can’t shake the feeling that creating a sadistic, animal-torturing robot is somehow wrong. The next step is surely a robot that torments monkeys, which are also extensively used in animal testing. Most of us would much rather that rats or monkeys die instead of humans, but it’s still an unsettling precedent.
by Jonathan Marcus
The era of drone wars is already upon us. The era of robot wars could be fast approaching.
Already there are unmanned aircraft demonstrators like the arrow-head shaped X-47B that can pretty well fly a mission by itself with no involvement of a ground-based “pilot”.
There are missile systems like the Patriot that can identify and engage targets automatically.
And from here it is not such a jump to a fully-fledged armed robot warrior, a development with huge implications for the way we conduct and even conceive of war-fighting.
On a carpet in a laboratory at the Georgia Institute of Technology in Atlanta, Professor Henrik Christensen’s robots are hunting for insurgents. They look like cake-stands on wheels as they scuttle about.
Christensen and his team at Georgia Tech are working on a project funded by the defence company BAE Systems.
Their aim is to create unmanned vehicles programmed to map an enemy hideout, allowing human soldiers to get vital information about a building from a safe distance.
“These robots will basically spread out,” says Christensen, “they’ll go through the environment and map out what it looks like, so that by the time you have humans entering the building you have a lot of intelligence about what’s happening there.”
The emphasis in this project is reconnaissance and intelligence gathering. But the scientific literature has raised the possibility of armed robots, programmed to behave like locusts or other insects that will swarm together in clouds as enemy targets appear on the battlefield. Each member of the robotic swarm could carry a small warhead or use its kinetic energy to attack a target.
Peter W Singer, an expert in the future of warfare at the Brookings Institution in Washington DC, says that the arrival on the battlefield of the robot warrior raises profound questions.
“Every so often in history, you get a technology that comes along that’s a game changer,” he says. “They’re things like gunpowder, they’re things like the machine gun, the atomic bomb, the computer… and robotics is one of those.”
“When we say it can be a game changer”, he says, “it means that it affects everything from the tactics that people use on the ground, to the doctrine, how we organise our forces, to bigger questions of politics, law, ethics, when and where we go to war.”
Jody Williams, the American who won the Nobel Peace Prize in 1997 for her work leading the campaign to ban anti-personnel landmines, insists that the autonomous systems currently under development will, in due course, be able to unleash lethal force.
Williams stresses that value-free terms such as “autonomous weapons systems” should be abandoned.
“We prefer to call them killer robots,” she says, defining them as “weapons that are lethal, weapons that on their own can kill, and there would be no human being involved in the decision-making process. When I first learnt about this,” she says, “I was honestly horrified — the mere thought that human beings would set about creating machines that they can set loose to kill other human beings, I find repulsive.”
It is an emotive topic.
But Professor Ronald Arkin from the Georgia Institute of Technology takes a different view.
He has put forward the concept of a weapons system controlled by a so-called “ethical governor”.
It would have no human being physically pulling the trigger but would be programmed to comply with the international laws of war and rules of engagement.
“Everyone raises their arms and says, ‘Oh, evil robots, oh, killer robots,’” he notes, “but we have killer soldiers out there. Atrocities continue and they have continued since the beginning of warfare.”
His answer is simple: “We need to put technology to use to address the issues of reducing non-combatant casualties in the battle-space”.
He believes that “the judicious application of ethical robotic systems can indeed accomplish that, if we are foolish enough as a nation, as a world, to persist in warfare.”
Arkin is no arms lobbyist and he has clearly thought about the issues.
There is also another aspect to this debate that perhaps would be a powerful encouragement to caution. At present, the US is one of the technological leaders in this field, but as Singer says this situation will not last forever.
“The reality is that besides the United States there are 76 countries with military robotics programmes right now,” he says.
“This is a rapidly proliferating technology with relatively low barriers to entry.
“You can, for a couple of hundred dollars, purchase a small drone that a couple of years ago was limited to militaries. This can’t be a situation that you interpret through an American lens. It’s of global concern.”
Just as drone technology is spreading fast, making the debates about targeted killings of much wider relevance — so too robotics technology will spread, raising questions about how these weapons may be used or should be controlled.
The prospect of totally autonomous weapons technology – so called “human-out-of-the-loop” systems – is still some way off. But Nobel Prize winner Jody Williams is not waiting for them to arrive.
She plans to launch an international campaign to outlaw further research on robotic weapons, aiming for “a complete prohibition of robots that have the ability to kill”.
“If they are allowed to continue to research, develop and ultimately use them, the entire face of warfare will be changed forever in an absolutely terrifying fashion.”
Arkin takes a different view of the ethical arguments.
He says that to ban such robots outright, without doing the research to understand whether they can lower non-combatant casualties, is to do “a disservice to those who are, unfortunately, slaughtered in warfare by human soldiers”.
by Tracy McVeigh
‘Autonomous weapons’, which could be ready within a decade, pose grave risk to international law, claim activists
A new global campaign to persuade nations to ban “killer robots” before they reach the production stage is to be launched in the UK by a group of academics, pressure groups and Nobel peace prize laureates.
Robot warfare and autonomous weapons, the next step from unmanned drones, are already being worked on by scientists and will be available within the decade, said Dr Noel Sharkey, a leading robotics and artificial intelligence expert and professor at Sheffield University. He believes that development of the weapons is taking place in an effectively unregulated environment, with little attention being paid to moral implications and international law.
The Stop the Killer Robots campaign will be launched in April at the House of Commons and includes many of the groups that successfully campaigned to have international action taken against cluster bombs and landmines. They hope to get a similar global treaty against autonomous weapons.
“These things are not science fiction; they are well into development,” said Sharkey. “The research wing of the Pentagon in the US is working on the X47B [unmanned plane] which has supersonic twists and turns with a G-force that no human being could manage, a craft which would take autonomous armed combat anywhere in the planet.
“In America they are already training more drone pilots than real aircraft pilots, looking for young men who are very good at computer games. They are looking at swarms of robots, with perhaps one person watching what they do.”
Sharkey insists he is not anti-war but deeply concerned about how quickly science is moving ahead of the presumptions underlying the Geneva convention and the international laws of war.
by Lee Bell
A CAR THAT CAN DRIVE ITSELF, dubbed Robotcar, has been unveiled by a team from Oxford University.
The Robotcar was shown off being tested on UK roads for the first time on Thursday by the Mobile Robotics Group in a series of YouTube videos.
Using its technology the car is able to accelerate, brake and drive itself along familiar routes through a combination of navigation, planning and control algorithms developed as part of the Oxford Robotcar UK project.
The vehicle used is a modified Nissan LEAF car with laser rangefinders and cameras mounted around the vehicle, plus a computer in the boot to perform the calculations necessary to plan, control speeds and avoid obstacles.
“Instead of imagining some cars driving themselves all of the time we should imagine a time when all cars can drive themselves some of the time,” said Professor Paul Newman, who leads the team. “The sort of very low cost, low footprint autonomy we are developing is what’s needed for everyday use.”
The driver manages the car through an iPad mounted in the front, enabling them to activate automatic driving whenever the conditions are suitable.
Manual control of the car can be regained at any time simply by pressing the brake pedal.
“It’s exactly like cruise control in an existing vehicle – only this time the car sees obstacles, controls speed and steering. It feels very natural,” the group said on its website.
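The hand-over described above is a small state machine: the driver engages autonomy from the iPad when conditions suit, and the brake pedal is an unconditional override back to manual control. The sketch below is a simplified, hypothetical rendering of that logic; the condition checks are assumptions, not Oxford's actual system.

```python
# Minimal engage/disengage logic for a driver-supervised autonomous car.

class RobotcarControl:
    def __init__(self):
        self.autonomous = False  # the car always starts under manual control

    def ipad_engage(self, route_known, obstacles_clear):
        """Offer autonomy only on familiar, mapped routes with a clear road."""
        if route_known and obstacles_clear:
            self.autonomous = True
        return self.autonomous

    def brake_pressed(self):
        """Pressing the brake pedal always returns manual control."""
        self.autonomous = False

car = RobotcarControl()
car.ipad_engage(route_known=True, obstacles_clear=True)
assert car.autonomous
car.brake_pressed()
assert not car.autonomous
```

Making the brake an unconditional override, rather than one more condition to evaluate, is the design choice that makes the hand-back "feel very natural."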
One of the hallmarks of the 21st century is that we are all having more and more interactions with machines and fewer with human beings. If you’ve lost your white collar job to downsizing, or to a worker in India or China, you’re most likely a victim of what economists have called technological unemployment. There is a lot of it going around, with more to come.
At the vanguard of this new wave of automation is the field of robotics. Everyone has a different idea of what a robot is and what they look like but the broad universal definition is a machine that can perform the job of a human. They can be mobile or stationary, hardware or software, and they are marching out of the realm of science fiction and into the mainstream.
The age of robots has been anticipated since the beginning of the last century. Fritz Lang fantasized about it in his 1927 film “Metropolis.” In the 1940s and 50s, robots were often portrayed as household help.
And by the time the “Star Wars” trilogy arrived, robots, with their computerized brains and nerve systems, had been fully integrated into our imagination. Now they’re finally here, but instead of serving us, they are competing for our jobs. And according to MIT professors Erik Brynjolfsson and Andrew McAfee, they are one of the reasons for the jobless recovery.
by Hillary Brenhouse (Dec. 22, 2010)
A restaurant that opened this month in the eastern Chinese province of Shandong uses robots for waiters, boosting efficiency and providing further proof that human beings are superfluous. Machines don’t grumble over tips. And they can’t spit in your food.
The traditional hotpot eatery is staffed by more than a dozen automated servers, the distant and brightly colored relations of Star Wars’ golden droid C-3PO. The robots whir around the room on little bicycles carrying meat and veggies to be dipped by restaurant-goers into bubbling broth. Customers need not shout, weep or make obscene gestures to get their waiter’s attention. Every bot is equipped with motion sensors; all you have to do is get in one’s way and nab a plate of food.
Indeed, patrons of the Dalu Robot restaurant in Jinan, Shandong’s capital, seem pleased with the change. “They have a better service attitude than humans,” Li Xiaomei, a newcomer to the place, which can seat 100, told the AP. “Humans can be temperamental or impatient, but [the robo-waiters] don’t feel tired, they just keep working and moving round and round the restaurant all night.”
Restaurant owner Zhang Yongpei is hoping, eventually, to put 40 of the machines to work and come out, with the help of the Shandong Dalu Science and Technology Company, with models that can climb stairs. As it is, the droids don’t just serve. A female-looking bot with fake fluttering lashes stands at the door to welcome diners—if not warmly, then at least in a soothing monotone. Another, clad in a dress, flails its arms about—sort of—in an effort to entertain the crowd. OK, maybe humans still are good for something.
by Will Knight
Famed AI researcher and incorrigible singularity forecaster Ray Kurzweil recently shed some more light on what his new job at Google will entail. It seems that he does, indeed, plan to build a prodigious artificial intelligence, which he hopes will understand the world to a much more sophisticated degree than anything built before–or at least that will act as if it does.
Kurzweil’s AI will be designed to analyze the vast quantities of information Google collects and to then serve as a super-intelligent personal assistant. He suggests it could eavesdrop on your every phone conversation and email exchange and then provide interesting and important information before you ever knew you wanted it. It sounds like a scary-smart version of Google Now (see “Google’s Answer to Siri Thinks Ahead”).
Kurzweil says this of his project at Google, in a video posted by The Singularity Hub:
“There’s no more important project than understanding Intelligence and recreating it. I do envision a fundamental approach based on everything we understand about how the human brain [works]. And there are some things we don’t yet understand so I plan to go off and explore some of my own ideas about how certain things work.”
Kurzweil makes it sound like the effort will be based on the theory put forward in his new book, How to Create a Mind. In this work, based largely on observations about current trends in AI research and his own work on speech and character recognition, Kurzweil suggests a fairly simple mechanism by which information is captured and accessed hierarchically throughout the neocortex, and posits that this phenomenon can explain the miracle of human conscious experience.
Kurzweil’s claims are certainly bold, and some have criticized them as hopelessly naïve. Indeed, it’s easy to dismiss any predictions he makes because of the outlandish ones he’s made in the past. But Kurzweil is nothing if not a brilliant inventor, and he indicates that at Google he’ll be rolling his sleeves up and doing real engineering. It’ll be fascinating to see how far this remarkable project takes both the inventor and the company.
by Louise Gray
David Gardner, Chief Executive of the Royal Agricultural Society of England, pointed out robots are already milking cows.
Within the next 20 to 40 years, he said that robots will be able to cultivate land, ‘zap’ weeds and pick fruit and vegetables.
Already Fendt, a German company, are hoping to make a driverless tractor available by 2014. Farmers would control two tractors from one cab. It could be used in flat, large fields in East Anglia within a couple of years for repetitive tasks like de-stoning the soil on vegetable beds.
The new “cabless” tractors use technology developed for the military, using GPS to know where they are and sensors to detect humans or other life in the machine’s path.
“A large part of the future of British agriculture is robotic,” said Mr Gardner. “Robots will replace jobs that are quite dull and repetitive.”
“Autonomous machines will be doing some of the work, if not all of the work by the middle of the century.”
Mr Gardner said much of the more delicate technology is being developed in collaboration with the medical world.
He cited the Handle project, which is creating a prosthetic hand. He has been in contact with the scientists about sharing knowledge for creating robots that can pick fruit.
“Will people still be picking apples in 100 years? I don’t think so. Once we acknowledge that, it is when, not if.”
Farmers are already using computers on tractors to make agriculture more efficient.
So-called ‘i-farming’ uses GPS to tell tractors where less pesticide is needed, cutting waste.
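The idea can be sketched in a few lines: the sprayer looks up its GPS fix in a field map and applies only the rate prescribed for that patch. This is an illustrative sketch, not any vendor's system; the grid size, map values and function names are all assumptions.

```python
# Illustrative sketch of GPS-guided variable-rate spraying:
# a prescription map keyed by grid cell tells the tractor how
# much pesticide each patch of the field should receive.

CELL_SIZE = 10.0  # metres per grid cell (assumed)

# Hypothetical prescription map: (col, row) -> litres per hectare
prescription = {
    (0, 0): 2.0,   # weed-prone corner: full rate
    (0, 1): 0.5,   # healthy zone: reduced rate
}
DEFAULT_RATE = 1.0  # litres per hectare where no survey data exists

def application_rate(easting_m, northing_m):
    """Return the spray rate for the grid cell containing this GPS fix."""
    cell = (int(easting_m // CELL_SIZE), int(northing_m // CELL_SIZE))
    return prescription.get(cell, DEFAULT_RATE)

print(application_rate(3.0, 14.0))   # healthy cell (0, 1) -> 0.5
print(application_rate(55.0, 80.0))  # unmapped cell -> default 1.0
```

In a real system the map would come from crop-sensing surveys and the positions from an RTK-corrected GPS receiver, but the lookup logic is essentially this simple.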
In livestock farming sensors on cows’ ankles are being used to tell farmers when cows are pregnant or lame.
Mr Gardner insisted the revolution in farming does not mean “big ugly machines”.
He pointed out that many of the robots will be small and light, and insisted they will be safe because radar and remote control will prevent collisions with objects or humans.
He said robots could even help to maintain the countryside, by making it easier for small family farms to continue without having to employ labour.
“The barriers are legal and social and whether it will be acceptable to the British public.”
Robonaut 2 – nicknamed R2 in a nod to the Star Wars trilogy – was launched in February 2011 on the last flight of NASA’s Discovery space shuttle.
It began work last March, practicing some of the duller or more dangerous jobs which astronauts hope it will carry out on their behalf, and was pictured on Wednesday during another round of testing.
An Earth-based team of programmers remotely controlled the robot as it operated valves on a task board in the space station’s Destiny laboratory.
We may never have our flying cars, but the future is here. From creating fully functioning artificial leaves to hacking the human brain, science made a lot of breakthroughs this year.
1. QUADRIPLEGIC USES HER MIND TO CONTROL HER ROBOTIC ARM
At the University of Pittsburgh, the neurobiology department worked with 52-year-old Jan Scheuermann over the course of 13 weeks to create a robotic arm controlled only by the power of Scheuermann’s mind.
The team implanted two 96-channel intracortical microelectrode arrays in her motor cortex, which controls limb movement. The integration process was faster than anyone expected: on the second day, Jan could use her new arm in a 3-D workspace, and by the end of the 13 weeks she was capable of performing complex tasks with seven degrees of freedom of movement, much like a biological arm.
To date, there have been no negative side effects.
2. DARPA ROBOT CAN TRAVERSE AN OBSTACLE COURSE
Once the robot figures out how to do that without all the wires, humanity is doomed.
DARPA was also hard at work this year making robots to track humans and run as fast as a cheetah, which seems like a great combination with no possibility of horrible side effects.
3. GENETICALLY MODIFIED SILK IS STRONGER THAN STEEL
At the University of Wyoming, scientists modified a group of silkworms to produce silk that is, weight for weight, stronger than steel. Different groups hope to benefit from the super-strength silk, including stronger sutures for the medical community, a biodegradable alternative to plastics, and even lightweight armor for military purposes.
4. DNA WAS PHOTOGRAPHED FOR THE FIRST TIME
Using an electron microscope, Enzo di Fabrizio and his team at the Italian Institute of Technology in Genoa snapped the first photos of the famous double helix.
5. INVISIBILITY CLOAK TECHNOLOGY TOOK A HUGE LEAP FORWARD
British Columbia company HyperStealth Biotechnology showed a functioning prototype of its new fabric to the U.S. and Canadian military this year. The material, called Quantum Stealth, bends light waves around the wearer without the use of batteries, mirrors, or cameras. It not only blocks the subject from being seen by visual means but also keeps them hidden from thermal scans and infrared.
6. SPRAY-ON SKIN
ReCell by Avita Medical is a medical breakthrough for severe-burn victims. The technology uses a postage stamp–size piece of skin from the patient, leaving the donor site with what looks like a rug burn. Then the sample is mixed with an enzyme harvested from pigs and sprayed back onto the burn site. Each tiny graft expands, covering a space up to the size of a book page within a week. Since the donor skin comes from the patient, the risk of rejection is minimal.
7. JAMES CAMERON REACHED THE DEEPEST KNOWN POINT IN THE OCEAN
Cameron was the first human to reach the bottom of the Mariana Trench solo. At 6.8 miles deep, it is perhaps a more alien place to scientists than some foreign planets are. The 2.5-story “vertical torpedo” sub descended over a period of two and a half hours before taking a variety of samples.
8. STEM CELLS COULD EXTEND HUMAN LIFE BY OVER 100 YEARS
When fast-aging elderly mice with a usual lifespan of 21 days were injected with stem cells from younger mice at the Institute for Regenerative Medicine in Pittsburgh, the results were staggering. Given the injection approximately four days before they were expected to die, the elderly mice not only survived, they lived more than three times their normal lifespan, sticking around for 71 days. In human terms, that would be the equivalent of an 80-year-old living to be 200.
9. 3-D PRINTER CREATES FULL-SIZE HOUSES IN ONE SESSION
The D-Shape printer, created by Enrico Dini, is capable of printing a two-story building, complete with rooms, stairs, pipes, and partitions. Using nothing but sand and an inorganic binding compound, the resulting material has the same durability as reinforced concrete with the look of marble. The building process takes approximately a quarter of the time of traditional construction, as long as it sticks to rounded structures, and requires no specialist knowledge or skills.
10. SELF-DRIVING CARS ARE LEGAL IN NEVADA, FLORIDA, AND CALIFORNIA
Google started testing its driverless cars in the beginning of 2012, and by May, Nevada was the first state to take the leap in letting them roam free on the roads. With these cars logging over 300,000 autonomous miles so far, the only two accidents involving them happened while they were being manually driven.
11. VOYAGER I LEAVES THE SOLAR SYSTEM
Launched in 1977, Voyager I is the first manmade object to fly beyond the confines of our solar system and out into the blackness of deep space. It was originally designed to send home images of Saturn and Jupiter, but NASA scientists soon realized that the probe would eventually float out into the great unknown. To that end, a recording was placed on Voyager I with sounds ranging from music to whale calls, and greetings in 55 languages.
12. CUSTOM JAW TRANSPLANT CREATED WITH 3-D PRINTER
A custom working jawbone was created for an 83-year-old patient using titanium powder and bioceramic coating. The first of its kind, the successful surgery opens the door for individualized bone replacement and, perhaps one day, the ability to print out new muscles and organs.
13. ROGUE PLANET FLOATING THROUGH SPACE
Until this year, every planet scientists knew of orbited a star. Then came CFBDSIR2149. With four to seven times the mass of Jupiter, it is the first free-floating object to be officially defined as an exoplanet rather than a brown dwarf.
14. CHIMERA MONKEYS CREATED FROM MULTIPLE EMBRYOS
While all the donor cells were from rhesus monkeys, the researchers combined up to six distinct embryos into three baby monkeys. According to Dr. Mitalipov, “The cells never fuse, but they stay together and work together to form tissues and organs.” Chimera species are used in order to understand the role specific genes play in embryonic development and may lead to a better understanding of genetic mutation in humans.
15. ARTIFICIAL LEAVES GENERATE ELECTRICITY
Using relatively inexpensive materials, Daniel G. Nocera created the world’s first practical artificial leaf. The self-contained units mimic the process of photosynthesis, but the end result is hydrogen instead of oxygen. The hydrogen can then be captured into fuel cells and used for electricity, even in the most remote locations on Earth.
16. GOOGLE GOGGLES BRING THE INTERNET EVERYWHERE
Almost everyone has seen the video of Google’s vision of the future. With its Goggles, everyday life is overlaid with a HUD (head-up display). Controlled by a combination of voice commands and where the user is looking, the Goggles can show pertinent information, surf the web, or call a loved one.
17. THE HIGGS BOSON PARTICLE WAS DISCOVERED
Over the summer, multinational research center CERN confirmed it had discovered a particle that behaved enough like a Higgs boson to be given the title. For scientists, this meant there could be a Higgs field, similar to an electromagnetic field. In turn, this could one day allow scientists to interact with mass the same way we currently do with magnetic fields.
18. FLEXIBLE, INEXPENSIVE SOLAR PANELS CHALLENGE FOSSIL FUEL
At half the price of today’s cheapest solar cells, Twin Creeks’ Hyperion uses an ion cannon to bombard wafer-thin panels. The result is a commercially viable, mass-produced solar panel that costs around 40 cents per watt.
19. DIAMOND PLANET DISCOVERED
An exoplanet made entirely of diamonds was discovered this year by an international research team. Approximately five times the size of Earth, the small planet had mass similar to that of Jupiter. Scientists believe the short distance from its star coupled with the exoplanet’s mass means the planet, remnants of another star, is mostly crystalline carbon.
20. EYE IMPLANTS GIVE SIGHT TO THE BLIND
Two blind men in the U.K. were fitted with eye implants during an eight-hour surgery with promising results. After years of blindness, both had regained “useful” vision within weeks, picking up the outlines of objects and dreaming in color. Doctors expect continued improvement as their brains rewire themselves for sight.
21. WALES BARCODES DNA OF EVERY FLOWERING PLANT SPECIES IN THE COUNTRY
Led by the National Botanic Garden’s head of research and conservation, a database of DNA for all 1,143 native flowering plant species of Wales has been created. With the use of over 5,700 barcodes, plants can now be identified by photos of their seeds, roots, wood, or pollen. The goal is to help researchers track things such as bee migration patterns or how a plant species encroaches on a new area. The hope is to eventually barcode both animal and plant species across the world.
22. FIRST UNMANNED COMMERCIAL SPACE FLIGHT DOCKS WITH THE ISS
SpaceX docked its unmanned cargo craft, the Dragon, with the International Space Station. It marked the first time in history a private company had sent a craft to the station. The robotic arm of the ISS grabbed the capsule in the first of what will be many resupply trips.
23. ULTRA-FLEXIBLE “WILLOW” GLASS WILL ALLOW FOR CURVED ELECTRONIC DEVICES
Created by New York–based developer Corning, the flexible glass prototype was shown off at an industry trade show in Boston. At only 0.05mm thick, it’s as thin as a sheet of paper. Perhaps Sony’s wearable PC concept will actually be possible before 2020.
24. NASA BEGINS USING ROBOTIC EXOSKELETONS
The X1 Robotic Exoskeleton weighs in at 57 lbs. and contains four motorized joints along with six passive ones. With two settings, it can either hinder movement, such as when helping astronauts exercise in space, or aid movement, assisting paraplegics with walking.
25. HUMAN BRAIN IS HACKED
At Usenix Security, a team of researchers used off-the-shelf technology to show how vulnerable the human brain really is. With an EEG (electroencephalography) headset attached to the scalp and software that interprets the firing neurons, the system watches for spikes in brain activity when the user recognizes something, such as an ATM PIN or a child’s face.
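The core of the attack is simply event-locked spike detection: show the user a stimulus, then check whether the EEG amplitude jumps shortly afterward. The sketch below is a minimal illustration of that idea, not the researchers' actual code; the signal values, thresholds, and function names are all assumptions.

```python
# Minimal sketch of recognition-spike detection: after each stimulus,
# check whether any sample in a short window exceeds the baseline by
# more than a threshold (loosely analogous to a P300-style response).

def spikes_after_stimuli(samples, stimulus_times, baseline, threshold, window=3):
    """Return the stimulus times that were followed by an amplitude spike."""
    hits = []
    for t in stimulus_times:
        post = samples[t:t + window]          # samples just after the stimulus
        if any(s - baseline > threshold for s in post):
            hits.append(t)
    return hits

# Synthetic EEG trace: flat baseline near 10, with a spike after the
# stimulus shown at t=4 — as if the user recognized that one.
signal = [10, 10, 11, 10, 10, 18, 10, 10, 10, 10]
print(spikes_after_stimuli(signal, stimulus_times=[1, 4, 7],
                           baseline=10, threshold=5))  # -> [4]
```

Real EEG analysis averages over many trials and uses statistical classifiers rather than a fixed threshold, but the attack's logic reduces to this: the brain's involuntary response marks which stimulus was familiar.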
26. FIRST PLANET WITH FOUR SUNS DISCOVERED
Discovered by amateur astronomers, the planet closely orbits a pair of stars, which in turn orbit another set of more distant stars. It’s approximately the size of Neptune, so scientists are still trying to work out how the planet has avoided being pulled apart by the gravitational force of that many stars.
27. MICROSOFT PATENTED THE “HOLODECK”
The patent suggests Microsoft wants to take gaming beyond a single screen and turn it into an immersive experience — beaming images all over the room, accounting for things like furniture, and bending the graphics around them to create a seamless environment.
Over the decades, technology has progressed faster than at any other time in human history. Electronic machines are being used to improve our everyday lives, and some believe that by 2045 humans will become one with machines.
In 1987, the film RoboCop debuted and featured a half-man half-robot cop patrolling the streets of Detroit, but now some car companies are planning on replacing cop cars in Los Angeles with drone cars by 2025. Ramon Galindo gives us a glimpse of the future police force.