Anaesthetic devices are ‘vulnerable to hackers’

A type of anaesthetic machine used in NHS hospitals can be hacked and controlled from afar if left accessible on a hospital computer network, it has been found. Cyber-security company CyberMDX reported that a successful attacker would be able to change the amount of anaesthetic delivered to a patient, as well as silence the alarms designed to alert anaesthetists to danger.

Research by CyberMDX suggested the Aespire and Aestiva 7100 and 7900 devices could be targeted. Nottingham University Hospitals (NUH) NHS Trust said that these devices are being phased out, and pointed out that its anaesthetic machines are not connected to the internet or the NUH network – so the risk is very low.

GE, which makes the machines, said there was no ‘direct patient risk’.

The likelihood of harm being caused to a patient through any hacking of the devices was “incredibly small” said Dr Helgi Johannsson, consultant anaesthetist and Royal College of Anaesthetists Council Member.
“Patients should be reassured that their anaesthetist will be monitoring them constantly, and will have received many years of training to rectify immediately the situation of a device failure.”

www.bbc.co.uk/technews (10th July 2019)

Instagram takes on the bullies

Instagram has added a new anti-bullying tool which prompts users to pause and consider what they are saying. It will also soon offer the targets of bullying the ability to restrict interactions with users who are causing them distress. Instagram has been under pressure to deal with its bullying problem after high profile cases.

Adam Mosseri, chief executive of Instagram, said ‘we can do more to prevent bullying from happening on Instagram, and we can do more to empower the targets of bullying to stand up for themselves. These tools are grounded in a deep understanding of how people bully each other and how they respond to bullying on Instagram, but they’re only two steps on a longer path.’

Instagram said it was using artificial intelligence to recognise when text resembles the kind of posts that are most often reported as inappropriate by users. In one example, a person types “you are so ugly and stupid”, only to be interrupted with a notice saying: “Are you sure you want to post this? Learn more”.

If the user taps “learn more”, a notice informs: “We are asking people to rethink comments that seem similar to others that have been reported.”

The user can ignore the message and post anyway, but Instagram said in early tests that “we have found that it encourages some people to undo their comment and share something less hurtful once they have had a chance to reflect.”
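Instagram has not published how its classifier works; as a toy illustration only, the reported-comment-similarity idea could be sketched with a simple string-similarity check (the `REPORTED` examples and the 0.6 threshold below are invented for the sketch):

```python
from difflib import SequenceMatcher

# Hypothetical stand-ins for comments users previously reported.
REPORTED = ["you are so ugly", "nobody likes you"]

def needs_rethink(comment: str, threshold: float = 0.6) -> bool:
    """True if the comment closely resembles a previously reported one."""
    return any(
        SequenceMatcher(None, comment.lower(), r).ratio() >= threshold
        for r in REPORTED
    )

def submit_comment(comment: str) -> str:
    # Mirror the prompt described in the article; the user may still post.
    if needs_rethink(comment):
        return "Are you sure you want to post this? Learn more"
    return "posted"
```

The real system uses a trained model rather than string matching; the sketch only shows the nudge-before-posting flow.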

The tool is being rolled out to English-speaking users at first, with plans to eventually make it available globally.

The company said it will soon roll out an additional tool, called Restrict, designed to help teens filter abusive comments without resorting to blocking others.

“We’ve heard from young people in our community that they’re reluctant to block, unfollow, or report their bully because it could escalate the situation, especially if they interact with their bully in real life,” Mr Mosseri said. “Some of these actions also make it difficult for a target to keep track of their bully’s behaviour.” Once a user has been restricted, their comments will appear only to themselves. Crucially, a restricted person will not know they have been restricted.
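The visibility rule described above – a restricted person's comments appear only to themselves – can be modelled with a per-account restriction list (the `Account` class and field names here are hypothetical, not Instagram's API):

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    """Hypothetical model: each account keeps its own restricted set."""
    name: str
    restricted: set = field(default_factory=set)

def visible_comments(viewer: Account, post_owner: Account, comments: list) -> list:
    """Comments on post_owner's post as seen by viewer.

    A restricted author's comments are hidden from everyone except the
    author, so the restricted person never learns they were restricted.
    """
    return [
        (author, text)
        for author, text in comments
        if author not in post_owner.restricted or author == viewer.name
    ]
```

Because the filter is applied per viewer rather than at posting time, the restricted user's own view is unchanged – which is the point of the feature.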

www.bbc.co.uk/technews (8th July 2019)

Google announces AI ethics panel

Google has launched a global advisory council to offer guidance on ethical issues relating to artificial intelligence, automation and related technologies.

The panel was announced at MIT Technology Review’s EmTech Digital, a conference organised by the Massachusetts Institute of Technology. The panel, consisting of eight people, will consider some of Google’s most complex challenges.

Google has come under intense criticism – internally and externally – over how it plans to use emerging technologies. In June 2018 the company said it would not renew a contract it had with the Pentagon to develop AI technology to control drones. Project Maven, as it was known, was unpopular among Google’s staff, and prompted some resignations. In response, Google published a set of AI “principles” it said it would abide by. They included pledges to be “socially beneficial’ and “accountable to people”.

The Advanced Technology External Advisory Council (ATEAC) will meet for the first time in April. In a blog post, Google’s head of global affairs, Kent Walker, said there would be three further meetings in 2019.

It will discuss recommendations about how to use technologies such as facial recognition. Last year, Google’s then-head of cloud computing, Diane Greene, described facial recognition tech as having “inherent bias” due to a lack of diverse data.

In a highly-cited thesis entitled Robots Should Be Slaves, panel member Joanna Bryson argued against the trend of treating robots like people.
“In humanising them,” she wrote, “we not only further dehumanise real people, but also encourage poor human decision making in the allocation of resources and responsibility.”
In 2018 she argued that complexity should not be used as an excuse to not properly inform the public of how AI systems operate.
“When a system using AI causes damage, we need to know we can hold the human beings behind that system to account.”

www.bbc.co.uk/technews (26th March 2019)

Twitter hoax warning

Twitter has issued a warning to users to ignore a hoax suggesting an alternative colour scheme will appear if they change their birth year to 2007. Instead users who fall for the scam will be locked out of their accounts because Twitter prohibits anyone under the age of 13 from using the site.
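Why the hoax works comes down to simple age arithmetic: a 2007 birth year makes an account holder under 13 in 2019, which trips the minimum-age rule. A minimal sketch (the locking function is a guess at the logic, not Twitter's code; it assumes 1 January when only the year is known):

```python
from datetime import date

MIN_AGE = 13  # Twitter's minimum age for using the site

def age_on(birth: date, today: date) -> int:
    """Whole years between birth and today."""
    had_birthday = (today.month, today.day) >= (birth.month, birth.day)
    return today.year - birth.year - (0 if had_birthday else 1)

def account_locked(birth_year: int, today: date) -> bool:
    # Hypothetical check: assume 1 Jan when only the year was changed.
    return age_on(date(birth_year, 1, 1), today) < MIN_AGE
```

With a 2007 birth year the computed age in mid-2019 is 12, so the account is locked; restoring a pre-2007 year would not automatically unlock it.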

A spokesman for Twitter declined to confirm how many people have succumbed to the hoax so far. The hoax has been circulating for a few days, with one tweet promoting it having received nearly 20,000 retweets since it was posted on Monday. Many appear to have been taken in by the hoax, with some expressing dismay at having lost access to their accounts.

In another recent scam, verified Twitter accounts were taken over by hackers and used to spread fake links offering free Bitcoin to users.

www.bbc.co.uk/technews (27th March 2019)

Facebook sues over ‘data-grabbing’ quizzes

According to Facebook, malicious quiz apps were used to harvest thousands of users’ profile data. Anyone who wants to take the quizzes is asked to install browser extensions, which then lift data ranging from names and profile pictures to private lists of friends. Facebook reported that these were installed about 63,000 times between 2016 and October 2018.

The quizzes, with titles such as ‘What does your eye colour say about you?’ and ‘Do people love you for your intelligence or your beauty?’, gained access to this information via the Facebook Login system – which enables connections between third party apps and Facebook profiles. While the system is intended to verify that such connections are secure, in this case, Facebook says users were falsely told the app would retrieve only a limited amount of public data from their profiles.

Facebook is suing Andrey Gorbachov and Gleb Sluchevsky, of Ukraine, who worked for a company called Web Sun Group. “In total, defendants compromised approximately 63,000 browsers used by Facebook users and caused over £58,000 in damages to Facebook,” the company said in court documents first published by online news site The Daily Beast. The documents accuse the two men of breaking US laws against computer hacking as well as breaching Facebook’s own terms of use.

www.bbc.co.uk/technews (11th March 2019)

Raspberry Pi opens first High Street store

The team behind the pocket-sized Raspberry Pi computer is opening its first high street store in the city where it was invented – Cambridge. The firm will also offer a new starter kit of parts. Eben Upton, the founder, hopes the shop will attract customers who are ‘curious’ about the brand.

The store will offer merchandise and advice on the use of the computer, which measures 3.4 inches by 2.1 inches and is designed to encourage people to try coding and programming. The computer was the brainchild of the Raspberry Pi Foundation, established by a group of Cambridge scientists in 2006, and was launched in 2012. The Raspberry Pi resembles a motherboard with ports and chips exposed, used principally as an educational tool for programming. It has now sold 25 million units globally and remains the best-selling British computer.


www.bbc.co.uk/technews (7th February 2019)