
Monday, July 18, 2016

SpaceX Launches Critical Space Station Docking Port for NASA

As reported by CBS News: A SpaceX Falcon 9 rocket roared to life and streaked into space early Monday, boosting a Dragon cargo ship into orbit loaded with nearly 5,000 pounds of equipment and supplies bound for the International Space Station, including a critical docking mechanism needed by U.S. crew capsules now under construction.

SpaceX also pulled off its fifth first-stage landing and its second touchdown at the Cape Canaveral Air Force Station. Recovering, refurbishing and eventually re-launching Falcon 9 stages is a key element in company founder Elon Musk's ongoing attempt to dramatically lower the cost of spaceflight.
But the first-stage recovery Monday was a purely secondary objective.
The mission's primary goal was to boost the SpaceX Dragon cargo ship into orbit and along with it, the long-awaited International Docking Adapter, or IDA.
The half-ton Boeing-built component will replace one that was destroyed in a June 2015 Falcon 9 launch failure and keep NASA on track for initial test flights of the new crew ferry ships in 2017 or 2018.
Also on board the Dragon: 815 pounds of food, clothing and other crew supplies; 280 pounds of spacewalk equipment; 119 pounds of Russian hardware; 617 pounds of U.S. station equipment and spare parts; and a full ton of research samples and equipment, including an innovative device the size of a small candy bar to carry out the first gene sequencing in space.
"We're really interested in how this works in microgravity; it's never been done before," said astronaut Kate Rubins, a molecular biologist launched to the space station July 6. "We're going to be trying to do the first DNA sequencing in space, and it'll be a combination of a bacteria, a virus and a mouse genome that we'll be sequencing."
The MinION gene sequencer works by measuring very slight changes in electrical conductivity as DNA components pass through a biological pore. Rubins said the research she hopes to carry out will help scientists better understand the mechanisms behind bone loss, muscle atrophy and other negative aspects of living in microgravity.
"It also has a benefit for Earth-based research as well," she said. "When we do things in a remote environment up here, we can understand how these technologies might work in remote places on Earth that don't have access to good medical care," Rubins said.
Other experiments of interest aboard the Dragon include a study to determine the effects of weightlessness on microorganisms that originated in the Chernobyl nuclear disaster; a study to learn how beating heart cells are affected by weightlessness; tests of an experimental ship tracking system; and another to evaluate a new type of heat exchanger to control spacecraft temperatures.
Also on board: 12 mice that will be returned to Earth aboard the Dragon in late August. The rodents will be studied after the flight to determine how they were affected by exposure to weightlessness.
"This research cargo will support over 280 experiments," said Camille Alleyne, a space station program scientist at the Johnson Space Center in Houston. "These are in a variety of disciplines, including human research, biology and biotechnology, physical sciences, Earth and space sciences, technology demonstration and education."
The latter category includes a cache of tomato seeds that will be distributed to school kids after the flight to find out what effects, if any, might be due to time spent in weightlessness.
Putting on a dramatic overnight sky show under a nearly full moon, the 20-story-tall Falcon 9's nine first-stage engines ignited with a torrent of fiery exhaust at 12:45 a.m. EDT, boosting the rocket away from pad 40 at the Cape Canaveral Air Force Station.
Liftoff came just 31 hours after launch of a Russian Progress supply ship from the Baikonur Cosmodrome in Kazakhstan that is carrying 5,800 pounds of supplies and equipment. If all goes well, the Progress MS-03/64P supply ship will reach the station Monday evening, docking at the Earth-facing Pirs module around 8:22 p.m.
The 229-foot-tall Falcon 9, generating 1.5 million pounds of thrust, initially climbed straight up from launch complex 40 and then arced away to the northeast as it thundered directly into the plane of the space station's orbit.
Spectators, including VIPs and the media, were forced to leave the nearby Kennedy Space Center an hour before liftoff because of recent modifications designed to enable the Dragon's landing parachutes to deploy in the event of a mishap that might leave the capsule -- and its valuable cargo -- intact after a booster failure.
Because forecasters predicted winds that could blow a descending capsule back toward the launch site after an in-flight accident, Air Force range safety officers ordered non-essential personnel to leave the area in case of a landing that could result in the release of toxic propellants.
But the climb away from the Cape was picture perfect. The first-stage engines burned for two minutes and 21 seconds, propelling the rocket out of the dense lower atmosphere before shutting down. A moment later, the first stage fell away and the single engine powering the second stage ignited to continue boosting the Dragon toward orbit.
The first stage, meanwhile, flipped around tail first and carried out the first of three engine firings to halt its forward velocity and begin the flight back to a landing at the Cape Canaveral Air Force Station.
Four minutes later, three of the first-stage engines re-ignited to slow the ship for descent back into the discernible atmosphere. Then, about seven-and-a-half minutes after launch, the engines fired back up a third time for landing.
Spectators in nearby Port Canaveral were treated to a spectacular show as the stage descended through a clear night sky atop a jet of fiery exhaust, deploying four landing legs and settling to a smooth touchdown on a pad known as Landing Zone 1. The landing was heralded by two sonic booms that rumbled across the Space Coast.
Going into the mission, SpaceX had made nine attempts to land a Falcon 9 first stage, chalking up four successes: three on an off-shore "drone ship" and one at the Air Force station last December. The record now stands at five successes in 10 attempts. If all goes well, SpaceX hopes to re-launch its first used booster this fall.
While the first stage was flying back to Florida, the Falcon 9's second stage was boosting the Dragon toward space. Finally, about nine minutes and 37 seconds after liftoff, the Dragon separated from the second stage. A few minutes after that, two solar arrays unfolded and locked in place.
If all goes well, the Dragon will carry out a computer-controlled rendezvous, approaching the station from below early Wednesday. Expedition 48 commander Jeff Williams, operating the lab's robot arm, plans to latch onto the Dragon around 7 a.m. Flight controllers in Houston then will take over arm operations, moving the cargo ship into position for berthing at the forward Harmony module's Earth-facing port.

Friday, July 15, 2016

The Hysteria Over Tesla’s Autopilot Has Been Blown out of Proportion

As reported by BGR.com: Over the past few weeks, Tesla’s Autopilot software has been unfairly singled out and scrutinized as a piece of technology that Tesla recklessly deployed before being 100% ready for day-to-day use. This hysteria against all things Tesla reached a fever pitch this week when Consumer Reports published a self-righteous piece calling for Tesla to disable Autopilot until it gets the technology figured out. And late on Thursday, we learned that even the U.S. Senate Committee on Commerce, Science and Transportation directed a letter to Elon Musk asking him or a Tesla representative to answer a few questions.
With all of the commotion, speculation and, at times, wild accusations being thrown in Tesla’s direction, you might be forgiven for assuming that Teslas on Autopilot have been running amok like mindless zombies on The Walking Dead and causing accidents by the hundreds.
It’s time to jump back to reality.
Despite an avalanche of hit-pieces and misguided op-eds, it’s far too soon to say with any certainty that Tesla’s Autopilot software has been the direct cause of any specific crash. With respect to the fatal Model S crash that happened in Florida last May, one witness claims that the driver may have been watching a portable DVD player at the time of the crash. Additionally, evidence suggests that the driver (a former Navy SEAL who friends say had a “need for speed“) was likely speeding at the time of the crash.
More recently, it came to light that the widely publicized Model X crash in Montana involving Autopilot was likely the result of a) the feature being used incorrectly and b) the driver ignoring repeated warning alerts to re-assume control of the vehicle.
Just last night, Tesla disclosed that the Model X that crashed and turned itself over on a Pennsylvania highway did not, as it turns out, have the Autopilot software turned on.
Now this isn’t meant to blame the victims, but rather to illustrate that education about Tesla’s Autopilot software would perhaps be more beneficial than blindly writing the technology off as something dangerous and to be avoided.
One area where Tesla may have made a misstep lies in the Autopilot name itself. It certainly works well from a marketing perspective, but calling the feature Autopilot arguably gives drivers a false sense of security: the name suggests, even on an unconscious level, that Teslas are futuristic vehicles capable of traversing any and all road conditions with ease, all without the need for human assistance.
But in reality, Tesla’s Autopilot software in its current incarnation was only designed to be used in specific conditions, a fact Tesla makes a point of emphasizing.

The Autosteer feature in particular, Tesla informs its drivers, should only be used “on highways that have a center divider and clear lane markings, or when there is a car directly ahead to follow.” Tesla also adds that the feature should not be used “on other kinds of roads or where the highway has very sharp turns or lane markings that are absent, faded, or ambiguous.”
Nonetheless, Tesla has indicated that it will soon publish a blog post that more clearly delineates the capabilities and limitations of its Autopilot software.
Consumer Reports, though, isn’t satisfied and takes Tesla to task for selling consumers a “pile of promises about unproven technology.” Driving the point home, Consumer Reports’ VP of consumer policy and mobilization Laura MacCleery added that “consumers should never be guinea pigs for vehicle safety ‘beta’ programs.”
This is grossly misleading. To date, Tesla vehicles with Autopilot engaged are statistically safer than other cars out on the road. But because Tesla is Tesla, any word of a crash where Autopilot software may or may not have been activated gets blown up out of proportion. Thing is, we’ve already been down this road before: remember when the sky was falling back when some media outlets were reporting that Tesla cars were statistically more prone to catching on fire?
Consumer Reports suggests that Tesla disable Autopilot software until it’s ready for prime time use. But such a suggestion completely ignores the realities that govern technology as advanced as self-driving automotive software.
The following statement was made in the wake of the release of Apple Maps and I believe the same applies to Tesla’s Autopilot software.
Unfortunately, like dialect recognition or speech synthesis (think Siri), mapping is one of those technologies that can’t be fully incubated in a lab for a few years and unleashed on several hundred million users in more than 100 countries in a “mature” state.
Autopilot software can only improve out in the real world when used by real drivers. If we as a society want to live in a world where cars can one day drive themselves, the stark reality is that there will be a growth or learning period involved that will have to take place out in the real world.
On another front, Jalopnik raises a good point in stating that we need to start holding drivers accountable for irresponsibly using software designed to make everyone on the road safer.
Consumer Reports argues that Tesla’s Autopilot system, as it sits, is too open for misuse. I don’t buy that. Because nowhere in the system’s marketing, in its instructions, or even its implied promises of capability, does it ever absolve that responsibility that we’ve relied on for over a hundred years. It’s still the responsibility of the person sitting in the driver’s seat, not the responsibility of the company that created it. That’s how we’ve looked at cars since they’ve been invented, and no one is going after Ferrari, for instance, for all the deaths caused when its cars are used improperly. We’re not demanding it electronically limit the speeds of its cars, adjustable only to GPS-informed local speed limits.
And as Tim Stevens of CNET points out, we only hear about Tesla’s Autopilot software when crashes occur, not when they’re avoided.
This controversy is about many things, but at its core, I firmly believe Autopilot is a technology that can and has saved lives. The problem is, we’ll never know how many lives are being saved by the various active safety systems found in modern cars. We’ll only know when they fail to save a life.
But like most items that travel through the meat grinder that is the news cycle, information about Tesla’s Autopilot software has been completely distorted to serve the conclusory arguments that Tesla’s software is half-baked and not safe for drivers to use.
Tesla’s Autopilot feature is far from perfect, and undoubtedly, Tesla can take any number of steps to improve its performance and to better educate its consumer base about its limitations. If the feature malfunctions during normal use and results in an accident, the company deserves to be critiqued. But catapulting weighted accusations at the company based on incomplete and oftentimes incorrect information does nothing more than needlessly engender fear.
Lastly, Consumer Reports’ list of four recommendations for Tesla underscores how deeply the company’s technology is misunderstood.
Two of the four recommendations read:
  • Disable Autosteer until it can be reprogrammed to require drivers to keep their hands on the steering wheel
  • Test all safety-critical systems fully before public deployment; no more beta releases
As to the first point, requiring drivers to keep their hands on the wheel runs counter to the very notion of what Autosteer is. What a laughable suggestion.
As to the second, Tesla has said publicly that Autopilot is beta software only in the sense that the system has yet to cumulatively log more than 1 billion miles.
Musk later clarified that Tesla will likely reach that data point sometime within six months.


Friday, July 8, 2016

Tesla Autopilot Crash: What One Model S Owner Has to Say

As reported by Green Car Reports: The NHTSA has now said it will investigate two separate crashes reportedly involving a Tesla electric car operating with the Autopilot feature engaged.

Media coverage on the May 7 death of a Model S driver in Florida continues to debate the merits of electronic driver-assistance technologies, often using remarkably uninformed language.

One Tesla Model S driver felt it was appropriate to offer his views, to explain to a broader audience how many Tesla drivers view their cars and the Autopilot feature.

Mike Dushane has worked in the automotive industry, in various editorial and product development roles, for 20 years. He lives in the San Francisco area, and has owned a Model S since May 2015.

He tells us he's used pretty much every electronic driver-assistance aid in production. Those include the very first adaptive cruise control systems, which appeared in the U.S. on Mercedes-Benz vehicles in the late 1990s.

What follows in the balance of this article are Dushane's words, edited for clarity and style.

Much speculation about Tesla’s “Autopilot” has surfaced since news of a fatal accident earlier this year that involved the system. Some of that speculation discusses “autonomous cars”, which the Tesla is not.

We paid $2,500 for "Autopilot convenience features" to be enabled in our Tesla Model S, and have since driven thousands of miles using the system. So perhaps I can shed some light on what the Autopilot system does—and how owners actually use it.

When we ordered our car in early 2015, it was my expectation that Tesla would roll out a set of features that make driving safer and less stressful by steering and braking on limited-access roads in normal circumstances. Tesla met my expectations.

I did not expect a car that could drive itself.

Tesla owners on forums and groups actively talk about the potential of the car’s systems, but I think it's accurate to say that we never expected the car to be autonomous. It just doesn't have the hardware to allow that.

There's no laser system to build a computer model of all potential threats and obstacles, just short-range proximity sensors all around, plus front-facing camera and radar systems.

As a driver assistance function, Autopilot really does make driving safer and less stressful. But it has to be used as an aid, not a substitute for driver responsibility.

Other than gimmicks like "summon" (which brings the car a short distance to the driver), Autopilot is just adaptive cruise control plus very good automatic lane centering. That's it.

Those are great features to make a long highway slog less wearing and safer, because the car can react faster than a driver in many situations, but the Autopilot systems are not designed to account for cross traffic, merges, pedestrians, or any number of other situations any driver must be prepared to deal with.

Autopilot will keep the car in a lane and prevent it from banging into other vehicles also traveling in the same direction in that lane.

The Tesla community on the whole gets this. Of thousands of Tesla drivers I've communicated with, none think it's an "autonomous car."

Driving this truth home is the fact that the Autopilot system needs frequent intervention. When I crest a hill, approach a sharp curve, or drive on broken pavement or where the lines are worn, the system often demands attention.

It flashes a warning that I need to assume control—and if I don't immediately do so, it beeps to drive the point home.

When the unexpected happens (traffic ahead screeches to a stop, or another driver looks like they might cut me off, etc.), I often take over before the car reacts or asks me to intervene.

Because of the frequency with which I need to override the system, it would never occur to me to allow the car to operate without me paying attention, least of all at high speed. I can't imagine not being ready to assume control at any time.

Ceding attention when the Autopilot system has so many limitations would be like counting on a car with normal (non-adaptive) cruise control to slow down for traffic in front of it—an irrational expectation.

That said, I've thought a lot about this tragic accident. A man died driving the same car as me, using the same technology I use every day.

Perhaps in Florida, where the accident occurred and the roads are straight, smooth, and flat, a driver could get lulled into inattention more easily than in Northern California, where the hilly topography and frequent curves require more frequent attention.

I come back to this: On the numbers, one death in 130,000,000 miles is beating the odds overall.

And we don't know the degree to which inattention played a role in this accident. One case in which a system didn't prevent an accident that it wasn't designed to prevent won't change my use of Autopilot. I'm a safer driver when the car has both my attention and that of computer monitoring systems.

I speculate that a potential result of this accident would be that Tesla will force its drivers to keep their hands on the wheel for auto-steer to work—on the assumption that a driver with his hands on the wheel may be more likely to keep his eyes on the road. That's how most other carmakers now program their systems.

I wouldn't be thrilled about that, but if it saves inattentive drivers from their own lack of responsibility, I'd accept it.

And I'd pay $2,500 all over again for adaptive cruise control and hands-on-wheel lane centering as good as Tesla's.

I hope Joshua Brown didn't die in vain. We may never know exactly what was going on in Josh’s car, but it seems possible that evasive driver action could have prevented or mitigated the severity of his crash.

I hope the attention his death has garnered allows consumers to understand the limitations of semi-autonomous driver aids better.

And I hope it leads to much greater awareness that, until fully autonomous cars are available (many years away, I'd guess), we need to (continue to) pay attention behind the wheel.

Thursday, July 7, 2016

Solar Road Technology Comes to Route 66

As reported by Engadget: Solar Roadways' dreams of sunlight-gathering paths are one step closer to taking shape. Missouri's Department of Transportation is aiming to install a test version of the startup's solar road tiles in a sidewalk at the Historic Route 66 Welcome Center in Conway. Okay, it won't be on Route 66 just yet, but that's not the point -- the goal is to see whether or not the technology is viable enough that it could safely be used on regular streets. You should see it in action toward the end of the year.

The tiles will be familiar if you've followed Solar Roadways before. Each one combines a solar cell with LED lighting, a heating element and tempered glass that's strong enough to support the weight of a semi-trailer truck. If successful, the panels will feed the electrical grid (ideally paying for themselves) and make the roads safer by both lighting the way as well as keeping the roads free of rain and snow. They should be easier to repair than asphalt, too, since you don't need to take out whole patches of road to fix small cracks.

Of course, "if successful" is the operative term here. The real litmus test comes if and when Solar Roadways subjects the tiles to the legions of cars traveling on Route 66 and beyond. Missouri has a strong incentive to make that happen, though. As the Transportation Department's Tom Blair observes, it would be odd to push self-driving cars in the state's Road to Tomorrow initiative when the streets aren't as smart as the vehicles using them.


Friday, July 1, 2016

US Federal Gov’t Opens Investigation Into First Known Tesla Autopilot Fatality

As reported by RT.com: The National Highway Traffic Safety Administration (NHTSA) is investigating the first known fatality involving a Tesla Model S where the Autopilot system was active, the company has confirmed.

On May 7, Ohio resident Joshua Brown, 45, was in the driver’s seat of a Tesla Model S in Williston, Florida. The car’s Autopilot was engaged when a tractor-trailer made a left turn in front of the electric vehicle. Brown was killed “when he drove under the trailer,” the Levy Journal reported at the time. “The top of Joshua Brown’s 2015 Tesla Model S vehicle was torn off by the force of the collision.”
After striking the underside of the trailer, the car then continued driving until it left the road, struck a fence, smashed through two other fences and struck a power pole.
On Thursday, Tesla announced that the NHTSA had opened a preliminary evaluation on Wednesday into the performance of Autopilot in the crash. Following its standard practice, Tesla had informed the federal agency of the accident immediately after it occurred.
“Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied. The high ride height of the trailer combined with its positioning across the road and the extremely rare circumstances of the impact caused the Model S to pass under the trailer,” Tesla said in a blog post.
“Had the Model S impacted the front or rear of the trailer, even at high speed, its advanced crash safety system would likely have prevented serious injury as it has in numerous other similar incidents,” the company noted.
The truck driver, Frank Baressi, a 62-year-old from Tampa, was not injured in the crash. Charges against him are pending, however, according to the Levy Journal.
 The accident "calls for an examination of the design and performance of any driving aids in use at the time of the crash," the agency said in a statement.
"Autopilot is getting better all the time, but it is not perfect and still requires the driver to remain alert,” Tesla said. “Nonetheless, when used in conjunction with driver oversight, the data is unequivocal that Autopilot reduces driver workload and results in a statistically significant improvement in safety when compared to purely manual driving."
Autopilot had logged more than 130 million miles of driving without a fatality, Tesla said. By comparison, a fatality occurs every 94 million miles driven in the US and every 60 million miles worldwide, the company added.
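To put those figures on a common footing, here is a small back-of-the-envelope sketch in Python. The mileage numbers are the ones quoted in the article; the per-100-million-mile framing is just a convenient normalization, and, as the comment notes, a single fatality is far too small a sample for firm statistical conclusions.

```python
# Back-of-the-envelope comparison of the fatality rates quoted above:
# one fatality in the miles driven with Autopilot engaged (per Tesla's figure)
# versus the US and worldwide averages cited in the same statement.

MILES_PER_FATALITY = {
    "Autopilot (Tesla's figure)": 130_000_000,  # miles driven before the first fatality
    "US average": 94_000_000,                   # one fatality per 94 million miles
    "Worldwide average": 60_000_000,            # one fatality per 60 million miles
}

for label, miles in MILES_PER_FATALITY.items():
    # Express each as fatalities per 100 million vehicle-miles for easy comparison.
    rate = 100_000_000 / miles
    print(f"{label}: {rate:.2f} fatalities per 100 million miles")

# Caveat: one fatality is a tiny sample; this only restates the article's
# numbers on a common scale, it does not establish relative safety.
```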
The NHTSA investigation will look into whether the Autopilot system was working properly at the time of the crash.
The accident "calls for an examination of the design and performance of any driving aids in use at the time of the crash," the agency said in a statement.
Autopilot is currently in its “public beta phase,” and customers must acknowledge that before they can use the system. They are also expected to keep their hands on the wheel while it is engaged, as well as to “maintain control and responsibility” for their vehicles.
"Autopilot is getting better all the time, but it is not perfect and still requires the driver to remain alert,” Tesla said.“Nonetheless, when used in conjunction with driver oversight, the data is unequivocal that Autopilot reduces driver workload and results in a statistically significant improvement in safety when compared to purely manual driving."