[…] Streaming’s impact on the way artists make music goes all the way to the top. Take Chris Brown, whose upcoming album Heartbreak on a Full Moon has 40 tracks, and not because he has so much to say. The famously unscrupulous pop star has found a way to boost his streaming numbers, which in turn inflate sales figures and will, he hopes, send his album shooting up the charts quicker than it otherwise would.
Even Spotify is reportedly gaming the system by paying producers to create songs that are then placed on the service’s massively popular playlists under the names of unknown, nonexistent artists. This upfront payment saves the company from writing the fat streaming checks that come with that plum playlist placement, but it tricks listeners into thinking the artists actually exist and limits the opportunities for real music-makers to make money. Spotify did not respond to questions about the accusation, but this is not the first time Spotify, which pays minuscule streaming fees, has been accused of bilking artists.
A cynic might look at all of this and shrug his shoulders. Craven opportunism has been a part of the music industry since the first concert ticket was sold. But even if the money-grubbing isn’t new, the manner in which it’s grubbed is. And no matter who’s doing it, the effect is the same: Music is devalued.
Ransomware is here to stay and is only going to get more dangerous as cybercriminals move towards increasingly sophisticated forms of the cryptographic malware to carry out targeted attacks.
This grim forecast is made by Kaspersky Lab in its newly released Ransomware in 2016–2017 report – but it isn’t all completely bad news, because researchers believe that competition in the underground ransomware market will lead to some families being killed off in an “intra-species massacre”.
Cybercriminals are still making plenty of money by exploiting victims with ransom demands ranging from a couple of hundred to a couple of thousand dollars. But many of these attacks rely on indiscriminate, large-scale spam email campaigns in the hope of luring in victims.
Now, however, some criminals are specifically targeting selected enterprise networks, infecting them via carefully crafted phishing emails and then extorting much higher ransom payments from their victims.
Propaganda on social media is being used to manipulate public opinion around the world, a new set of studies from the University of Oxford has revealed.
From Russia, where around 45% of highly active Twitter accounts are bots, to Taiwan, where a campaign against President Tsai Ing-wen involved thousands of heavily co-ordinated – but not fully automated – accounts sharing Chinese mainland propaganda, the studies show that social media is an international battleground for dirty politics.
The reports, part of the Oxford Internet Institute’s Computational Propaganda Research Project, cover nine nations, also including Brazil, Canada, China, Germany, Poland, Ukraine and the United States. They found “the lies, the junk, the misinformation” of traditional propaganda is widespread online and “supported by Facebook or Twitter’s algorithms”, according to Philip Howard, Professor of Internet Studies at Oxford.
At the simpler end, techniques include using automated accounts to like, share and post on the social networks. Such accounts can game algorithms to push content onto curated social feeds. They can drown out real, reasoned debate between humans in favour of a social network populated by argument and soundbites, and they can simply make online measures of support, such as the number of likes, look larger – crucial in creating the illusion of popularity.
The Trump era has brought a change of fortune for a Silicon Valley software company founded by presidential adviser Peter Thiel — turning it from a Pentagon outcast to a player with three allies in Defense Secretary James Mattis’ inner circle.
At least three Pentagon officials close to Mattis, including his deputy chief of staff and a longtime confidant, worked, lobbied or consulted for Palantir Technologies, according to ethics disclosures obtained by POLITICO. That’s an unusually high number of people from one company to have such daily contact with the Pentagon leader, some analysts say.
It also represents a sharp rise in prominence for the company, which just months ago could barely get a meeting in the Pentagon. Last year, Palantir even had to go to court to force its way into a competition for a lucrative Army contract.
Thiel was one of the few Silicon Valley titans to openly support Donald Trump during the campaign, a role that gave him a prime speaking slot at last summer’s Republican convention. He has since acted as a key adviser, arranging meetings between the president and other tech executives. While there’s no evidence he had a direct hand in these specific Pentagon hires, analysts say they clearly show his growing influence in the administration, where he holds no formal role.
“It is unusual to have several people with close ties to a particular contractor working in close proximity to the Defense secretary,” said Loren Thompson, a leading defense consultant. “It’s probably just a coincidence that several people with Palantir ties are around Mattis, but it certainly doesn’t look good.”
[..] So far, seven people have died as a result of the attack and 48 were injured. It follows a separate incident in March, when pedestrians were hit by a car on Westminster Bridge, and an attack in May, in which concertgoers in Manchester were attacked by a suicide bomber. According to the prime minister, the terror attacks are not linked by “common networks”, but the close proximity of these tragedies is certain to create a heightened urgency for politicians to demonstrate that something is being done to prevent another attack.
“Everybody needs to go about their lives as they normally would,” Prime Minister May told reporters in a statement. “Our society should continue to function in accordance with our values.” But that was the extent of May’s acknowledgement that society should not allow terrorism to dictate how we live. She then shifted to statements like, “There is, to be frank, far too much tolerance of extremism in our country.” It was an odd thing to say. Is there really a “tolerance of extremism” in the western world, or is it more a case of not wanting to sacrifice freedoms in accordance with the wishes of terrorists?
The statement was light on particulars, but greater policing of the internet was a key point that May hammered on multiple times. “We cannot allow this ideology the safe space it needs to breed,” she said. “Yet that is precisely what the internet and the big companies that provide internet-based services provide.” Her most specific and unnerving comment was, “We need to work with allied democratic governments to reach international agreements that regulate cyberspace to prevent the spread of extremist and terrorism planning.”
British Prime Minister Theresa May wasted no time after yesterday’s London Bridge terror attack in announcing that she will be pushing a new series of international agreements aimed at global regulation of speech on the Internet, claiming that extremists have been using “safe spaces online” in their terror attacks.
While this is being couched today as a reaction to the London attack, the reality is that this is a long-standing goal of Britain’s Tory government, with the Conservative Party’s current manifesto vowing efforts to force Internet providers to participate in “counter-extremism” efforts that would tightly regulate speech.
The manifesto’s plan goes well beyond just terrorism, looking to regulate speech broadly defined by the ruling party as “harmful,” and also to severely curtail the access of pornographic materials on the Internet. The pornography angle is, obviously, not being mentioned in connection to the London attack.
[…] The example highlights the dangers of jumping to conclusions in the murky world of cyber attack and defense, as tools once only available to government intelligence services find their way into the computer criminal underground.
Security experts call this “the attribution problem”: using technical evidence to assign blame for cyber attacks so that appropriate legal and political responses can be taken.
These questions echo through the debate over whether Russia used cyber attacks to influence last year’s U.S. presidential elections and whether Moscow may be attempting to disrupt national elections taking place in coming months across Europe.
The topic is a big talking point for military officials and private security researchers at the International Conference on Cyber Conflict in Tallinn this week. The event has been held each year since Estonia was swamped in 2007 by cyber attacks that took down government, financial and media websites amid a dispute with Russia. Attribution for those attacks remains disputed.
After months of categorically denying Russian involvement in cyberattacks during last year’s U.S. presidential elections, Russian President Vladimir Putin on Thursday said that while the Kremlin has never used state-sponsored cyberattacks to meddle in other countries’ elections, some “patriotically minded” volunteer hackers may have acted on their own to defend Russian interests.
“Hackers can be anywhere, and pop out from anywhere in the world,” Putin said in an address to Russian and foreign media during the opening day of an annual economic forum held in St. Petersburg.
The Russian president compared hackers to artists, who can act creatively, particularly when they are motivated by international relations and in the defense of Russia’s interests.
“If they woke up today, read that there is something happening in interstate relations,” he said. “If they are patriotic, they start contributing, as they see it, in the fight against those who do not speak well about Russia.”
One persistent criticism of Silicon Valley is that it no longer works on big, world-changing ideas. Every few months, a dumb start-up will make the news — most recently the one selling a $700 juicer — and folks outside the tech industry will begin singing I-told-you-sos.
But don’t be fooled by expensive juice. The idea that Silicon Valley no longer funds big things isn’t just wrong, but also obtuse and fairly dangerous. Look at the cars, the rockets, the internet-beaming balloons and gliders, the voice assistants, drones, augmented and virtual reality devices, and every permutation of artificial intelligence you’ve ever encountered in sci-fi. Technology companies aren’t just funding big things — they are funding the biggest, most world-changing things. They are spending on ideas that, years from now, we may come to see as having altered life for much of the planet.
At the same time, the American government’s appetite for funding big things — for scientific research and out-of-this-world technology and infrastructure programs — keeps falling, and it may decline further under President Trump.
This sets up a looming complication: Technology giants, not the government, are building the artificially intelligent future. And unless the government vastly increases how much it spends on research into such technologies, it is the corporations that will decide how to deploy them.
[…] The connection to the N.S.A. was particularly chilling. Starting last summer, a group calling itself the “Shadow Brokers” began to post software tools that came from the United States government’s stockpile of hacking weapons.
The attacks on Friday appeared to be the first time a cyberweapon developed by the N.S.A. – funded by American taxpayers and stolen by an adversary – had been unleashed by cybercriminals against patients, hospitals, businesses, governments and ordinary citizens.
Something similar occurred with remnants of the “Stuxnet” worm that the United States and Israel used against Iran’s nuclear program nearly seven years ago. Elements of those tools frequently appear in other, less ambitious attacks.
The United States has never confirmed that the tools posted by the Shadow Brokers belonged to the N.S.A. or other intelligence agencies, but former intelligence officials have said that the tools appeared to come from the N.S.A.’s “Tailored Access Operations” unit, which infiltrates foreign computer networks. (The unit has since been renamed.)
In the early years of the internet, it was revolutionary to have a world of information just a click away from anyone, anywhere, anytime. Many hoped this inherently democratic technology could lead to better-informed citizens more easily participating in debate, elections and public discourse.
Today, though, many observers are concerned that search algorithms and social media are undermining the quality of online information people see. They worry that bad information may be weakening democracy in the digital age.
The problems include online services conveying fake news, splitting users into “filter bubbles” of like-minded people and enabling users to unwittingly lock themselves up in virtual echo chambers that reinforce their own biases.
These concerns are much discussed, but have not yet been thoroughly studied. What research does exist has typically been limited to a single platform, such as Twitter or Facebook. Our study of search and politics in seven nations – which surveyed the United States, Britain, France, Germany, Italy, Poland and Spain in January 2017 – found these concerns to be overstated, if not wrong. In fact, many internet users trust search to help them find the best information, check other sources and discover new information in ways that can burst filter bubbles and open echo chambers.
Amy Goodman and Nermeen Shaikh speak with investigative reporter Barrett Brown, who recently completed a four-year prison sentence related to the hacking of the private intelligence firm Stratfor, which exposed how the firm spied on activists on behalf of corporations. He was released from prison earlier this year but was unexpectedly rearrested late last month, one day ahead of a scheduled interview for an upcoming PBS documentary. Brown was detained for four days and then released without receiving any formal written explanation for the arrest. (Democracy Now!)
On Friday afternoon, NHS hospitals across England and Scotland fell victim to a cyberattack that caused ambulances to be diverted, equipment to shut down, and clinical services to be disrupted.
The attack has prompted fears among commentators and on social media of a deliberate attempt to damage the NHS, or even to interfere in the UK election. But early evidence suggests it was neither deliberately targeted against hospitals, nor aimed at health data.
It wasn’t just NHS computers that were affected. It also hit major corporations, such as Spanish telecoms giant Telefonica – the parent company of the UK mobile network O2 – as well as computer systems in Russia, the USA, Japan and France.
Identifying the source of a cyber attack is a lengthy process usually requiring forensic examination of both the code used in the attack and how it spread across the internet, meaning we don’t yet know with certainty how the NHS attack spread.
The NHS computer systems were hit by what’s known as ransomware, which locks the files on any affected machine, making it unusable unless its owner pays a set amount, usually in the virtual currency Bitcoin, to an anonymous account.
NHS services across England and some in Scotland have been hit by a large-scale cyber-attack.
Staff cannot access patient data, which has been scrambled by ransomware. There is no evidence patient data has been compromised, NHS Digital has said.
NHS England has declared a major incident. The BBC understands up to 25 NHS organisations and some GP practices have been affected.
It comes amid reports of cyber-attacks affecting organisations worldwide.
A Downing Street spokesman said Prime Minister Theresa May was being kept informed of the situation, while Health Secretary Jeremy Hunt was being briefed by the National Cyber Security Centre.
The US justice department is refusing to disclose FBI documents relating to Donald Trump’s highly contentious election year call on Russia to hack Hillary Clinton’s emails.
Senior DoJ officials have declined to release the documents on grounds that such disclosure could “interfere with enforcement proceedings”. In a filing to a federal court in Washington DC, the DoJ states that “because of the existence of an active, ongoing investigation, the FBI anticipates that it will … withhold all records”.
The statement suggests that Trump’s provocative comment last July is being seen by the FBI as relevant to its own ongoing investigation.
[…] The then Republican presidential candidate ignited an instant uproar when he made his controversial comment at a press conference in Florida on 27 July. By that time Russia had already been accused by US officials of hacking Democratic National Committee emails in a bid to sway the election.
“I will tell you this, Russia: if you’re listening, I hope you’re able to find the 30,000 emails that are missing,” Trump said, referring to a stash of emails that Clinton had deleted from her personal server dating from her time as US secretary of state.
Later that day, the Republican candidate posted a similarly incendiary remark on Twitter: “If Russia or any other country or person has Hillary Clinton’s 33,000 illegally deleted emails, perhaps they should share them with the FBI!”
When the storm turns out to be less severe than the warnings, there’s always a sigh of relief – and maybe a bit of over-confidence after the fact. If fans of the European Union felt better after populist Geert Wilders came up short in the Dutch elections in March, they also took heart from the absence of anti-E.U. firebrands among the leading contenders for this fall’s German elections. Then came May 7. The victory of Emmanuel Macron over Marine Le Pen in France’s presidential elections signaled that “the season of growth of populism has ended,” Antonio Tajani, president of the European Parliament, said on May 8.
Not so fast. Europeans will soon remember that elections are never the end of anything – they’re a beginning. And whether the issue is unelected Eurocrats’ forcing voters to abide by rules they don’t like or fears that borders are insecure, there are good reasons to doubt that the anti-E.U. fever has broken. France’s Macron now faces powerful opposition on both the far right and the far left. Hungary and Poland are becoming increasingly illiberal. Brexit negotiations are getting ugly. And resentment toward the E.U. is still rising throughout Europe.
In the U.S., President Donald Trump may be pushing what increasingly resembles a traditional Republican agenda, but polls show that his supporters are still eager for deeper disruption. Trump’s embrace of Turkey’s Recep Tayyip Erdogan, Egypt’s Abdul Fattah al-Sisi and the Philippines’ Rodrigo Duterte suggests a lasting affinity with aggressive strongmen. His chief adviser and nationalist muse, Stephen Bannon, may be under fire, but he’s still there. The Trump presidency has only just begun.
In short, nationalism is alive and well, partly because the problems that provoked it are still with us. Growing numbers of people in the world’s wealthiest countries still fear that globalization serves only elites who care nothing about nations and borders. Moderate politicians still offer few effective solutions.
September 17th changed everything.
On that day in 2013, Oxford University published an innocuously titled academic paper by two mostly unknown economists. But “The Future of Employment” wasn’t just another number-crunching exercise in opacity by a couple of dreary scientists. No, their bombshell report portended a coming robot apocalypse that could change the nature of human civilization, and perhaps even human beings themselves.
Thankfully, the forthcoming carnage described by Carl Benedikt Frey and Michael A. Osborne isn’t a doomsday scenario where Skynet systematically wipes out humankind, or a darkly lit near-future where attractive Replicants violently struggle to make sense of their emerging emotions in a perpetually damp Los Angeles.
Instead, the economists previewed an all-too-real world where the second-richest man on the planet — Amazon’s Jeff Bezos — gleefully parades around like Sigourney Weaver in a massive robotic exoskeleton built by Hankook Mirae Technology.
They presaged the impending doom from robots like Handle, the Michael Jordan-esque robot built by Boston Dynamics. Handle can leap like a superhero, can run a marathon in under three hours and, if Softbank CEO Masayoshi Son is right, will probably be smarter than you in just a few decades.
They foresaw a future with the likes of Gordon, the “first robotic barista in the U.S.” Gordon can serve “about 120 coffees in an hour.” They also predicted the likes of Otto, the self-driving big-rig designated by Uber to deliver truckloads of beer to thirsty consumers. And then there’s Pepper, the empathic, “day-to-day” companion that is not just working in airports and banks, but being “adopted” into Japanese homes … and even “enrolling” in school.
It’s still unclear who hacked incoming French President Emmanuel Macron’s emails. But what does the way they then spread across the internet tell us about the way hackers and political movements work in tandem?
It was a huge story that broke in the very final hours of coverage of France’s presidential election campaign. But whoever dumped the leaked Macron emails online did not, by themselves, turn them into a global topic of discussion. That job was left to a network of political activists, aided by bots and automated accounts, and ultimately signal-boosted by the Twitter account of WikiLeaks.
BBC Trending has spoken to the main activist who took the data dump from a fringe message board to the mainstream – and we’ve pieced together the story of how the hack came to light.
- NSA chief: US warned France about Russian hacks before Macron leak
- Russians say they are fed up of hacking accusations following Macron leaks
- Did Macron outsmart Russian hackers by planting fake information?
- Did Russia Hack Macron? The Evidence Is Far From Conclusive
- Russia Probably Hacked Macron, But There’s Still No Clear Proof
- The Macron Leaks: Are They Real, and Is It Russia?
- USA far-right activists ‘helped amplify e-mail leak’
- Macron, Putin and the boomerang effect
- The Macron leak that wasn’t
Ahead of the British general election on June 8, Facebook has deleted tens of thousands of accounts in Britain in its ongoing battle with “fake news”, the AP reports. The campaign is part of Facebook’s evolving response to accusations that the company was responsible for influencing the US presidential election through the spread of fake news stories and “filter bubbles”.
“People want to see accurate information on Facebook and so do we. That is why we are doing everything we can to tackle the problem of false news,” said Simon Milner, Facebook’s director of policy for the UK. “To help people spot false news, we are showing tips to everyone . . . on how to identify if something they see is false.”
Simon Milner, the tech firm’s U.K. director of policy, says the platform wants to get to the “root of the problem” and is working with outside organizations to fact check and analyze content around the election. Milner added that Facebook is “doing everything we can to tackle the problem of false news.”
Additionally, on Monday, the social network announced a national print advertising campaign in the UK to “educate the British public” about fake news, as part of a concerted global effort to crack down on the false information epidemic it has seen on its platform. The ads suggest that readers should be “skeptical of headlines” and “look closely at the URL.” The company says it has made improvements that help it detect fake news accounts more effectively.
Facebook must remove postings deemed as hate speech, an Austrian court has ruled, in a legal victory for campaigners who want to force social media companies to combat online “trolling”.
The case — brought by Austria’s Green party over insults to its leader — has international ramifications as the court ruled the postings must be deleted across the platform and not just in Austria, a point that had been left open in an initial ruling.
The case comes as legislators around Europe are considering ways of forcing Facebook, Google, Twitter and others to rapidly remove hate speech or incitement to violence.
Germany’s cabinet approved a plan last month to fine social networks up to 50 million euros ($55 million) if they fail to remove such postings quickly and the European Union is considering new EU-wide rules.
Look, let’s just start with the basics: there are some bad people out there. Even if the majority of people are nice and well-meaning, there are always going to be some people who are not. And sometimes, those people are going to use the internet. Given that as a starting point, at the very least, you’d think we could deal with that calmly and rationally, and recognize that maybe we shouldn’t blame the tools for the fact that some not very nice people happen to use them. Unfortunately, it appears to be asking a lot these days to expect our politicians to do this. Instead, they (and many others) rush out immediately to point the finger of blame for the fact that these “not nice” people exist – and rather than pointing it at the not nice people, they point at… the internet services they use.
The latest example comes from the UK Parliament, which has released a report on “hate crime” that effectively blames internet companies and suggests they should be fined because not nice people use their services.
[…] This is the kind of thing that sounds good to people who (a) don’t understand how these things actually work and (b) don’t spend any time thinking through the consequences of such actions.
First off, it’s easy for politicians and others to sit there and assume that “bad” content is obviously bad. The problem here is twofold: first, there is so much content showing up that spotting the “bad” stuff is not nearly as easy as people assume, and second, because there’s so much content, it’s often difficult to understand the context enough to recognize if something is truly “bad.” People who think this stuff is obvious or easy are ignorant. They may be well-meaning, but they’re ignorant.
In June 2013, a young American postgraduate called Sophie was passing through London when she called up the boss of a firm where she’d previously interned. The company, SCL Elections, went on to be bought by Robert Mercer, a secretive hedge fund billionaire, renamed Cambridge Analytica, and achieved a certain notoriety as the data analytics firm that played a role in both the Trump and Brexit campaigns. But all of this was still to come. London in 2013 was still basking in the afterglow of the Olympics. Britain had not yet Brexited. The world had not yet turned.
“That was before we became this dark, dystopian data company that gave the world Trump,” a former Cambridge Analytica employee who I’ll call Paul tells me. “It was back when we were still just a psychological warfare firm.”
Was that really what you called it, I ask him. Psychological warfare? “Totally. That’s what it is. Psyops. Psychological operations – the same methods the military use to effect mass sentiment change. It’s what they mean by winning ‘hearts and minds’. We were just doing it to win elections in the kind of developing countries that don’t have many rules.”
Why would anyone want to intern with a psychological warfare firm, I ask him. And he looks at me like I am mad. “It was like working for MI6. Only it’s MI6 for hire. It was very posh, very English, run by an old Etonian and you got to do some really cool things. Fly all over the world. You were working with the president of Kenya or Ghana or wherever. It’s not like election campaigns in the west. You got to do all sorts of crazy shit.”
On that day in June 2013, Sophie met up with SCL’s chief executive, Alexander Nix, and gave him the germ of an idea. “She said, ‘You really need to get into data.’ She really drummed it home to Alexander. And she suggested he meet this firm that belonged to someone she knew about through her father.”
Who’s her father?
Eric Schmidt – the chairman of Google?
“Yes. And she suggested Alexander should meet this company called Palantir.”
I appeared at an event in New York this week with Edward Snowden to discuss how computers can be a tool for liberation instead of coercive control. The resounding optimistic feeling was that while networks can let Facebook gut our future, they can also be used to seize it.
These institutions use the information to circumvent hard-won constitutional protections. Western military contractors export these tools to oppressive dictatorships, creating “turnkey surveillance states”. In Ethiopia, the ruling junta has used hacking tools to break into the computers of exiled dissidents in the USA. The information it stole was used to target activists in Ethiopia for arbitrary detention and torture.
In my science fiction novel Walkaway, I see an optimistic escape from the looming surveillance disaster. It imagines people oppressed by surveillance might “walk away” and found a parallel society where citizens’ technological know-how creates a world of fluid, improvisational technological play.
Leading French presidential candidate Emmanuel Macron’s campaign said on Friday it had been the target of a “massive” computer hack that dumped its campaign emails online 1-1/2 days before voters choose between the centrist and his far-right rival, Marine Le Pen.
Macron, who is seen as the frontrunner in an election billed as the most important in France in decades, extended his lead over Le Pen in polls on Friday.
As much as 9 gigabytes of data were posted on a profile called EMLEAKS to Pastebin, a site that allows anonymous document sharing. It was not immediately clear who was responsible for posting the data or if any of it was genuine.
In a statement, Macron’s political movement En Marche! (Onwards!) confirmed that it had been hacked.
“The En Marche! Movement has been the victim of a massive and co-ordinated hack this evening which has given rise to the diffusion on social media of various internal information,” the statement said.
An interior ministry official declined to comment, citing French rules that forbid any commentary liable to influence an election, which took effect at midnight on Friday (2200 GMT).
- What we know about the massive computer hack against Macron
- Russia blamed as Macron campaign blasts ‘massive hacking attack’
- French media ordered by electoral commission not to publish content of Macron leaks
- France starts probing ‘massive’ hack of emails and documents reported by Macron campaign
- Former Clinton aides warn of Russian influence after Macron leak
The UK government has secretly drawn up more details of its new bulk surveillance powers – awarding itself the ability to monitor Brits’ live communications, and insert encryption backdoors by the backdoor.
Under its draft technical capability notices paper [PDF], all communications companies – including phone networks and ISPs – will be obliged to provide real-time access to the full content of any named individual’s communications within one working day, as well as any “secondary data” relating to that person.
That includes encrypted content – which means that UK organizations will not be allowed to introduce true end-to-end encryption of their users’ data but will be legally required to introduce a backdoor to their systems so the authorities can read any and all communications.
In addition, comms providers will be required to make bulk surveillance possible by introducing systems that can provide real-time interception of 1 in 10,000 of their customers. Or in other words, the UK government will be able to simultaneously spy on 6,500 folks in Blighty at any given moment.
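As a rough sanity check on that 6,500 figure, a minimal sketch of the arithmetic – assuming a UK population of roughly 65 million, which is the value the article's numbers imply rather than one it states:

```python
# Back-of-envelope check of the interception figure in the draft notice.
# The 65,000,000 population value is an assumption inferred from the article.
uk_population = 65_000_000
interception_rate = 1 / 10_000  # 1 in 10,000 customers, per the draft paper

simultaneous_targets = int(uk_population * interception_rate)
print(simultaneous_targets)  # 6500
```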
America’s working class is falling further behind.
The rich-poor gap — the difference in annual income between households in the top 20 percent and those in the bottom 20 percent — ballooned by $29,200 to $189,600 between 2010 and 2015, based on Bloomberg calculations using U.S. Census Bureau data.
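The two figures quoted above also pin down the starting point; a minimal sketch of that arithmetic, using only the numbers in the article:

```python
# Implied 2010 rich-poor gap, derived from the two figures quoted above.
gap_2015 = 189_600  # top-20% minus bottom-20% household income, 2015 ($)
increase = 29_200   # growth in the gap over 2010-2015 ($)

gap_2010 = gap_2015 - increase
print(gap_2010)  # 160400
```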
Computers and robots are taking over many types of tasks, shoving aside some workers while boosting the productivity of specialized employees, contributing to the gap.
“Technological developments have increasingly replaced low- and mid-skilled jobs while complementing higher-skilled jobs,” said Chad Sparber, an associate professor and chair of the economics department at Colgate University.
This shift is predicted to continue. About 38 percent of U.S. jobs could be at high risk of automation by the early 2030s, according to a study by PricewaterhouseCoopers LLP. The “most-exposed” industries include retail and wholesale trade, transportation and storage, and manufacturing, with less-educated workers facing the biggest challenges.
Political advertising that is only seen by its intended recipients is a greater cause for concern than “fake news” in the spread of misinformation, according to the director of a leading fact-checking charity in the UK.
So-called “dark ads” have emerged as a method of advertising that utilises data obtained by the likes of Facebook and Google to customise political campaigns.
They can be served directly to users of Facebook, or via Google’s widely used DoubleClick technology, which serves ads to millions of websites.
These two giants account for around half of the UK’s £10bn-a-year digital advertising market.
Wikipedia founder Jimmy Wales is launching a crowd-funded news service where supporters can pay for a say in the topics being covered.
The internet entrepreneur has created Wikitribune, a news initiative in which professional journalists and community contributors will work together to produce “fact-checked, global news stories”.
The new site will be free to use but will also accept donations from monthly “supporters”, who will then be able to suggest topics to be covered. As part of its transparency plans, the site also says it will publish full transcripts of interviews where possible.
“Wikitribune is news by the people and for the people,” Wales said.
The syringe slides in between the thumb and index finger. Then, with a click, a microchip is injected in the employee’s hand. Another “cyborg” is created.
What could pass for a dystopian vision of the workplace is almost routine at the Swedish start-up hub Epicenter. The company offers to implant its workers and start-up members with microchips the size of grains of rice that function as swipe cards: to open doors, operate printers or buy smoothies with a wave of the hand.
“The biggest benefit, I think, is convenience,” said Patrick Mesterton, co-founder and chief executive of Epicenter. As a demonstration, he unlocks a door merely by waving near it. “It basically replaces a lot of things you have, other communication devices, whether it be credit cards or keys.”
The technology itself is not new: Such chips are used as virtual collar plates for pets, and companies use them to track deliveries. But never before has the technology been used to tag employees on a broad scale. Epicenter and a handful of other companies are the first to make chip implants broadly available.
And as with most new technologies, it raises security and privacy issues. Although the chips are biologically safe, the data they generate can show how often employees come to work or what they buy. Unlike company swipe cards or smartphones, which can generate the same data, people cannot easily separate themselves from the chips.