
Tech

The New Apple Watch Measures Your Blood Oxygen. Now What?


The new Apple Watch can be summed up in two words: blood oxygen.

The ability to measure your blood’s oxygen saturation — an overall indicator of wellness — is the most significant new feature in the Apple Watch Series 6, which was unveiled this week and becomes available on Friday. (The watch is otherwise not that different from last year’s Apple Watch.) The feature is particularly timely with the coronavirus, because some patients in critical condition with Covid-19 have had low blood oxygen levels.

But how useful is this feature for all of us, really?

I had a day to test how the new $399 Apple Watch measures blood oxygen. The process was simple: You open the blood oxygen app on the device, keep your wrist steady and hit the Start button. For 15 seconds, a sensor on the back of the watch measures your blood oxygen level by shining lights onto your wrist; then the watch shows your reading. In three tests, my blood oxygen level stood between 99 percent and 100 percent.

I wasn’t quite sure what to do with this information. So I asked two medical experts about the new feature. Both were cautiously optimistic about its potential benefits, especially for research. The ability to constantly monitor blood oxygen levels with some degree of accuracy, they said, could help people discover symptoms for health conditions like sleep apnea.

“Continuous recording of data can be really interesting to see trends,” said Cathy A. Goldstein, a sleep physician at the University of Michigan’s Medicine Sleep Clinic, who has researched data collected by Apple Watches.

But for most people who are relatively healthy, measuring blood oxygen on an everyday basis could be way more information than we need. Ethan Weiss, a cardiologist at the University of California, San Francisco, said he was concerned that blood oxygen readings could breed anxiety in people and lead them to take unnecessary tests.

“It can be positive and negative,” he cautioned. “It could keep people out of doctors’ offices and at home and give them reassurance, but it could also create a lot of anxiety.”

That’s important to remember as smart watches gain new health-monitoring features that give us information about ourselves that we have to figure out how to use. When the Apple Watch Series 4 introduced an electrical heart sensor for people to take electrocardiograms in 2018, it was useful for people with known heart conditions to monitor their health — but doctors warned that it was also a novelty that should not be used to jump to conclusions or for people to self-diagnose heart attacks or other conditions.

And so, here we are again.

A healthy person will usually have blood oxygen levels in the mid- to high 90s. When people have health conditions such as lung disease, sleep disorders or respiratory infections, levels can dip to the 60s to the low 90s, Dr. Goldstein said.
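As a rough illustration of those ranges, a reading could be bucketed the way Dr. Goldstein describes. This is a sketch, not a clinical tool: the function name and the exact cutoffs are assumptions chosen to match the ranges in this article, not any medical standard.

```python
def categorize_spo2(percent: float) -> str:
    """Roughly bucket a blood oxygen (SpO2) reading.

    Thresholds follow the ranges described in the article:
    healthy people usually sit in the mid- to high 90s, while
    lung disease, sleep disorders or respiratory infections can
    pull readings down into the 60s to the low 90s.
    """
    if percent >= 95:
        return "typical for a healthy person"
    elif percent >= 60:
        return "low, worth discussing with a doctor"
    else:
        return "implausibly low, likely a bad reading"
```

As the article stresses, even a "low" bucket is a prompt to talk to a doctor, not a diagnosis.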

If you buy the Apple Watch and have access to information about your blood oxygen levels all the time, it’s important to have a framework for thinking about the data. Most importantly, you should have a primary care physician with whom you can share the measurements so that you can place them into context with your overall health, like your age and pre-existing conditions, Dr. Goldstein said.

But when it comes to medical advice and diagnosis, always defer to a doctor. If you notice a big dip in your blood oxygen level, it is not necessarily a reason to panic, and you should talk to your doctor to decide whether to investigate. And if you have symptoms of illness, such as fever or a cough, a normal blood oxygen reading shouldn’t be a reason to skip talking to a medical professional, Dr. Goldstein said.

Let a medical expert — not your watch — create the action plan.

Blood oxygen monitoring may be more useful for people who are already known to have health problems, Dr. Weiss said. For example, if someone with a history of heart failure saw lower saturation levels in their blood oxygen during exercise, that information could be shared with a doctor, who could then modify the treatment plan.

The information could also be used to determine whether a sick person should go to the hospital. “If a patient called me and said, ‘I have Covid and my oxygen level is at 80 percent,’ I would say, ‘Go to the hospital,”’ Dr. Weiss said.

In the end, health data on its own isn’t immediately useful, and we have to decide how to make the best use of the information. Apple doesn’t recommend what to do or how to feel about the information, just as a bathroom scale doesn’t tell you you’re overweight and give you a diet plan.

If you find that the data makes you more anxious, you could simply disable the feature, Dr. Goldstein said.

But even if blood oxygen measurement sounds gimmicky today, it’s important to keep an open mind about how new health-monitoring technologies might benefit us in the future. Both Dr. Goldstein and Dr. Weiss pointed to sleep apnea as an area where wearable computers might benefit people. The condition, which causes breathing problems during sleep, affects millions of Americans, but most people never know that they have it.

It’s a bit of a catch-22. If you had symptoms of sleep apnea, which include lower blood oxygen levels, your doctor would order a test. But you probably wouldn’t catch the symptoms while you were asleep, so a study would never be ordered.

The Apple Watch will periodically measure your blood oxygen level in the background, including when you are asleep. So if we gather data about ourselves while we’re slumbering, we might discover something unknown about ourselves — or not.

“Until we start doing it, we don’t know whether or not this information can be valuable,” Dr. Goldstein said.



Monster Wolf robot with glowing eyes protects Japanese town from bears


Monster Wolf robot was created to scare off bears and other wildlife from towns in Japan.


Video screenshot by Bonnie Burton/CNET

Bears beware. Residents of Takikawa, a town in central Hokkaido, Japan, have turned to a robot wolf to scare off bears that get the urge to roam through their streets. Bear incursions are apparently a regular occurrence, as the creatures scavenge for food in residential trash cans.

Usually, the town hires hunters to trap bears and remove them from city limits, but this time residents got more creative with a solution to frighten bears away.

The mechanical robot, nicknamed Monster Wolf, was built from machine parts by Ohta Seiki, a manufacturing company in Hokkaido. Monster Wolf is equipped with infrared sensors that can detect when a bear or other wildlife is in the vicinity, according to SoraNews24.

When a bear or other animal triggers Monster Wolf’s sensors, the robot’s head moves and its red LED eyes light up. Speakers inside the robot emit a variety of loud sounds — including wolf howls, gunshots and human voices — to startle and drive off the wildlife.
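That trigger-and-respond behavior can be sketched as a tiny event handler. Everything below (the class, method and attribute names) is hypothetical; Ohta Seiki's actual control software is not public, and only the list of sounds comes from the reporting.

```python
import random

# The sound list matches those reported (wolf howls, gunshots,
# human voices); the rest of this controller is a made-up sketch.
SOUNDS = ["wolf howl", "gunshot", "human voice"]

class MonsterWolfDeterrent:
    def __init__(self):
        self.eyes_lit = False
        self.head_moving = False

    def on_infrared_detection(self) -> str:
        """React to wildlife tripping the infrared sensor: move the
        head, light the red LED eyes, and pick a loud sound to play."""
        self.head_moving = True
        self.eyes_lit = True
        return random.choice(SOUNDS)
```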

The idea of having a robot drive off encroaching wildlife like bears is apparently very popular in Japan, as 62 communities have their own versions of a Monster Wolf robot in operation, SoraNews24 says. 



Activists Turn Facial Recognition Tools Against the Police


In early September, the City Council in Portland, Ore., met virtually to consider sweeping legislation outlawing the use of facial recognition technology. The bills would not only bar the police from using it to unmask protesters and individuals captured in surveillance imagery; they would also prevent companies and a variety of other organizations from using the software to identify an unknown person.

During the time for public comments, a local man, Christopher Howell, said he had concerns about a blanket ban. He gave a surprising reason.

“I am involved with developing facial recognition to in fact use on Portland police officers, since they are not identifying themselves to the public,” Mr. Howell said. Over the summer, with the city seized by demonstrations against police violence, leaders of the department had told uniformed officers that they could tape over their name tags. Mr. Howell wanted to know: Would his use of facial recognition technology become illegal?

Portland’s mayor, Ted Wheeler, told Mr. Howell that his project was “a little creepy,” but a lawyer for the city clarified that the bills would not apply to individuals. The Council then passed the legislation in a unanimous vote.

Mr. Howell was offended by Mr. Wheeler’s characterization of his project but relieved he could keep working on it. “There’s a lot of excessive force here in Portland,” he said in a phone interview. “Knowing who the officers are seems like a baseline.”

Mr. Howell, 42, is a lifelong protester and self-taught coder; in graduate school, he started working with neural net technology, an artificial intelligence that learns to make decisions from data it is fed, such as images. He said that the police had tear-gassed him during a midday protest in June, and that he had begun researching how to build a facial recognition product that could defeat officers’ attempts to shield their identity.

“This was, you know, kind of a ‘shower thought’ moment for me, and just kind of an intersection of what I know how to do and what my current interests are,” he said. “Accountability is important. We need to know who is doing what, so we can deal with it.”

Mr. Howell is not alone in his pursuit. Law enforcement has used facial recognition to identify criminals, using photos from government databases or, through a company called Clearview AI, from the public internet. But now activists around the world are turning the process around and developing tools that can unmask law enforcement in cases of misconduct.

“It doesn’t surprise me in the least,” said Clare Garvie, a lawyer at Georgetown University’s Center on Privacy and Technology. “I think some folks will say, ‘All’s fair in love and war,’ but it highlights the risk of developing this technology without thinking about its use in the hands of all possible actors.”

The authorities targeted so far have not been pleased. The New York Times reported in July 2019 that Colin Cheung, a protester in Hong Kong, had developed a tool to identify police officers using online photos of them. After he posted a video about the project on Facebook, he was arrested. Mr. Cheung ultimately abandoned the work.

This month, the artist Paolo Cirio published photos of 4,000 faces of French police officers online for an exhibit called “Capture,” which he described as the first step in developing a facial recognition app. He collected the faces from 1,000 photos he had gathered from the internet and from photographers who attended protests in France. Mr. Cirio, 41, took the photos down after France’s interior minister threatened legal action but said he hoped to republish them.

“It’s about the privacy of everyone,” said Mr. Cirio, who believes facial recognition should be banned. “It’s childish to try to stop me, as an artist who is trying to raise the problem, instead of addressing the problem itself.”

Many police officers around the world cover their faces, in whole or in part, as captured in recent videos of police violence in Belarus. Last month, Andrew Maximov, a technologist from the country who is now based in Los Angeles, uploaded a video to YouTube that demonstrated how facial recognition technology could be used to digitally strip away the masks.

In the simulated footage, software matches masked officers to full images of officers taken from social media channels. The two images are then merged so the officers are shown in uniform, with their faces on display. It’s unclear if the matches are accurate. The video, which was reported earlier by a news site about Russia called Meduza, has been viewed more than one million times.

“For a while now, everyone was aware the big guys could use this to identify and oppress the little guys, but we’re now approaching the technological threshold where the little guys can do it to the big guys,” Mr. Maximov, 30, said. “It’s not just the loss of anonymity. It’s the threat of infamy.”

These activists say it has become relatively easy to build facial recognition tools thanks to off-the-shelf image recognition software that has been made available in recent years. In Portland, Mr. Howell used a Google-provided platform, TensorFlow, which helps people build machine-learning models.

“The technical process — I’m not inventing anything new,” he said. “The big problem here is getting quality images.”
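At a high level, tools like this work by comparing face embeddings: a trained model (built with a platform like TensorFlow) turns each face image into a numeric vector, and an unknown face is matched to whichever labeled vector in a gallery it is closest to. The sketch below illustrates only that matching step with made-up vectors and labels; it is not Mr. Howell's code, and the threshold value is an arbitrary assumption.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def best_match(unknown, gallery, threshold=0.8):
    """Return the gallery label whose embedding is most similar to
    the unknown face, or None if nothing clears the threshold."""
    label, score = None, threshold
    for name, embedding in gallery.items():
        s = cosine_similarity(unknown, embedding)
        if s > score:
            label, score = name, s
    return label
```

This also makes Mr. Howell's point concrete: the matching math is routine, and the hard part is assembling a gallery of good-quality, correctly labeled images.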

Mr. Howell gathered thousands of images of Portland police officers from news articles and social media after finding their names on city websites. He also made a public records request for a roster of police officers, with their names and personnel numbers, but it was denied.

Facebook has been a particularly helpful source of images. “Here they all are at a barbecue or whatever, in uniform sometimes,” Mr. Howell said. “It’s few enough people that I can reasonably do it as an individual.”

Mr. Howell said his tool remained a work in progress and could recognize only about 20 percent of Portland’s police force. He hasn’t made it publicly available, but he said it had already helped a friend confirm an officer’s identity. He declined to provide more details.

Derek Carmon, a public information officer at the Portland Police Bureau, said that “name tags were changed to personnel numbers during protests to help eliminate the doxxing of officers,” but that officers are required to wear name tags for “non-protest-related duties.” Mr. Carmon said people could file complaints using an officer’s personnel number. He declined to comment on Mr. Howell’s software.

Older attempts to identify police officers have relied on crowdsourcing. The news service ProPublica asks readers to identify officers in a series of videos of police violence. In 2016, an anti-surveillance group in Chicago, the Lucy Parsons Lab, started OpenOversight, a “public searchable database of law enforcement officers.” It asks people to upload photos of uniformed officers and match them to the officers’ names or badge numbers.

“We were careful about what information we were soliciting. We don’t want to encourage people to follow officers to playgrounds with their kids,” said Jennifer Helsby, OpenOversight’s lead developer. “It has resulted in officers being identified.”

For example, the database helped journalists at the Invisible Institute, a local news organization, identify Chicago officers who struck protesters with batons this summer, according to the institute’s director of public strategy, Maira Khwaja.

Photos of more than 1,000 officers have been uploaded to the site, Ms. Helsby said, adding that versions of the open-source database have been started in other cities, including Portland. That version is called Cops.Photo, and is one of the places from which Mr. Howell obtained identified photos of police officers.

Mr. Howell originally wanted to make his work publicly available, but is now concerned that distributing his tool to others would be illegal under the city’s new facial recognition laws, he said.

“I have sought some legal advice and will seek more,” Mr. Howell said. He described it as “unwise” to release an illegal facial recognition app because the police “are not going to appreciate it to begin with.”

“I’d be naïve not to be a little concerned about it,” he added. “But I think it’s worth doing.”





Amazon parcel scam targets woman eight months after her death


A spokeswoman for Amazon said: “Third-party sellers are prohibited from sending unsolicited packages to customers and we take action on those who violate our policies, including withholding payments, suspending or removing selling privileges, or working with law enforcement. We’ve taken action on the account in question.”
