
BP’s New Interactive Gas Pump

I don’t know about you, but I actually don’t mind a little peace and quiet when I’m pumping gas. But with BP’s latest development, going to the gas station may turn into a whole new interactive experience.

“Through February, the oil and gas giant BP is testing a new kind of pump technology at several gas stations in the Chicago and New York City metro areas. The technology includes a tablet that lets customers interact with an assistive, AI-based personality called “Miles.””

“BP says it’s installed proprietary voice-activation technology inside what it’s calling its new “personality pumps,” which let customers do everything from play music through Pandora to answering music trivia to whipping up a quick, fun video e-card they can send to a friend. “Miles” is a friendly sounding helper and keeps things light since, of course, most people will only be there for the 5 minutes or less it takes to put gas in the tank.” (Source: BGR)

This is one of many examples of how new technology is being put to use for what I would gather under the umbrella of marketing-related purposes. BP, I’m assuming, sees Miles as a way to differentiate what is to a large extent a commodity product, and as a way to attract millennials to its brand.

Historically, the work that we do at Critical Link has been more along the lines of what we have traditionally called “industrial strength applications”, but more recently we have also been involved in apps that are used for promotion, advertising, and other purposes similar to BP’s Miles.

As for Miles, I know that it was designed to appeal to millennials, not old guys like me. But living in the snow belt in Upstate New York as I do, I wouldn’t mind if they extended Miles’ functionality so that it could robotically pump my gas. Maybe they’ll roll that out in the next generation!

 

Not your great-great-grandmother’s player piano

A friend of mine recently received an invitation to attend a demo of the Steinway Spirio, a high-res player piano. Anyway, my friend wasn’t able to go and see Spirio for herself, but she thought I might be interested in it from a technical standpoint. So I did a bit of digging around to see how Spirio works.

This is not your great-great-grandmother’s player piano, which probably batted out tunes like “A Bicycle Built for Two.” No, the Spirio plays musicians like Vladimir Horowitz, George Gershwin, and Thelonious Monk performing classical, popular, and jazz works. (They also have performances by artists who are still alive!) And the Spirio isn’t run from the paper piano rolls that had the notes punched into them. Nor does it use the low-res, MIDI-file (and hardware) approach that more recent player pianos take.

For starters, here’s what Steinway says about Spirio:

“The Steinway Spirio is a new high-resolution player piano that provides an unrivaled musical experience, indistinguishable from a live performance. Steinway Spirio brings your piano to life with the world’s greatest music, independent of the listener’s playing ability.”

“The technology used in the Steinway Spirio utilizes a proprietary high-resolution software system along with solenoids (a current-carrying coil of wire) that actuate each note on the piano, and two pedal solenoids. The technology was developed in a partnership between Steinway & Sons and Wayne Stahnke, a pioneer of the modern player piano system.” (Source: Steinway)

I’m actually more interested in the recording technology that enables “recording at the highest resolution possible…that captures the nuances and full range of emotion from each artist’s performance” (and that must also be able to translate the long-ago recorded works of artists no longer around to perform) than I am in the playback. (Not that the playback isn’t interesting, as Steinway has technology that can “replicate smaller increments of velocity on both the hammers and proportional pedaling.”) I’d like to know just how the system measures the hammer velocity “and the positioning of the damper lift tray and key shift.”

After a quick search for more of the technical details, all I came up with was an article from Keyboard Mag that mentions “sensors that allow minute gradations of touch, trills, pedaling and more.” And this from Wired:

“Whereas most player pianos reproduce human performances solely by recording the key strike, Steinway amassed Spirio’s catalog with a far more sophisticated system. Hardware and software embedded into the piano measured the velocity of the hammer hitting the string in 1,020 increments, taking stock of the hammer’s location and speed 800 times a second. The pedal motion was similarly documented at 200 times per second. This data created a vastly more nuanced picture of what the pianist was doing at any given time, meaning the piano’s built-in songs capture dynamics, repeating notes and the subtleties of the transition, say, from staccato to legato.”

“A software-controlled solenoid (electro-magnetic) system that’s installed underneath the piano activates the notes. Think of it this way: If you give a robot a paintbrush and tell it to paint Picasso, it might get the lines exactly right. What’s missing is pressure of the brushstroke, the depth of color, the expression that makes it art. That’s what Steinway is trying to achieve with the Spirio.”  (Source: Wired)
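Out of curiosity, here’s a quick back-of-the-envelope sketch of what that capture stream might amount to. To be clear, this is purely my own guess at a data layout – not Steinway’s actual format – and only the quoted figures (1,020 velocity increments, 800 hammer samples per second, 200 pedal samples per second) come from the Wired piece; every field name and size below is an assumption.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical capture record for one hammer sample. The 1,020 velocity
 * increments would fit in 10 bits, but 16-bit fields keep things simple. */
typedef struct {
    uint32_t timestamp_us;      /* microseconds since start of capture */
    uint8_t  key;               /* 0..87 for an 88-key piano */
    uint16_t hammer_velocity;   /* 0..1019 */
    uint16_t hammer_position;   /* arbitrary sensor units */
} hammer_sample_t;

int main(void)
{
    /* Sample rates quoted by Wired: 800/s per hammer, 200/s per pedal. */
    const double hammer_rate_hz = 800.0;
    const double pedal_rate_hz  = 200.0;
    const double pedal_bytes    = 4.0;   /* assumed pedal sample size */

    /* Silly worst case: all 88 hammers sampled continuously. */
    double bytes_per_sec = 88 * hammer_rate_hz * sizeof(hammer_sample_t)
                         + 2 * pedal_rate_hz * pedal_bytes;

    printf("Worst-case raw capture rate: ~%.0f kB/s\n", bytes_per_sec / 1024.0);
    return 0;
}
```

Even in that worst case the raw stream is well under a megabyte per second, which suggests the hard part is the sensing and actuation, not the data handling.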

Somewhere along the line, I saw that Wayne Stahnke, who developed the Spirio technology for Steinway, credits “the advent of modern high-speed FPGA technology” with making possible the earlier player piano breakthroughs he’d worked on.
Well, you know me. I’d like to know the specific details on what sensors they’re using, and whether the Spirio uses FPGAs….

Anyway, since the price tag is $100K+, I won’t be running out and buying a Spirio any time soon. And at that price point, I won’t be tempted to do a teardown to see what’s under the lid! Still would be interesting to know just what’s going on in there.

 

What you missed if you weren’t at ARM TechCon, Part Two

Last week, we began summarizing Rich Quinnell’s EE Times article on the new technologies that caught his eye at the recent ARM TechCon Conference. That post covered the first five of Rich’s “ten most intriguing technologies.” Here we’ll take care of the second half of the list.

Private LoRa networks
Multi-Tech is seeing a lot of businesses implementing private LoRa (Long Range wireless) networks, rather than working over public ones, for their IoT and Machine-to-Machine communications. To support this:

“Multi-Tech has created a ruggedized base station that supports thousands of end nodes within an area up to 10 miles in radius, and ties that local cell into the wide area network. This architecture allows users to create private LoRa networks within, say, a building or a corporate campus, then tie those networks together through the cloud. The result is the ability to deploy and manage a large number of IoT devices that are geographically diverse, in a private network.”

More news: the LoRa Alliance may be “working on a roaming strategy for [its] wireless standard that will allow devices to switch from private to public networks and back again, and among carriers.” Sounds interesting. (We’ll be staying tuned.)

Modular design prototyping system
ProDesign Electronic is offering its ProFPGA prototyping system. This lets developers build SoC prototypes so they can test system software even before the silicon’s available. There are multiple baseboard options (one-, two-, and four-slot)

“…into which they can plug any of several FPGA modules to build up their design. The modules can then connect together to share high-speed signals, as can up to five baseboards, to build up the resources needed to prototype a full design. Each FPGA module can also accept several of 60 different IO modules to handle this functionality without consuming FPGA resources.”

This should make for faster prototyping than using a simulation.

Sensor-based IoT
Would developers be “inspired” to create sensor-based IoT devices if it were made simpler? Silicon Labs apparently thinks so. They’re introducing the Thunderboard React kit.

“In addition to the demo software, which works right out of the box, ‘the kit makes all the mobile app, cloud service, and device code available to developers’.”

The board is based on Silicon Labs’ own Bluetooth Smart radio module, “and a collection of the company’s sensors to provide both a demonstration platform and a prototyping vehicle.”

Ultrasound creates tangible virtual objects
Apparently it wasn’t demo’d on the conference show floor, but Ultrahaptics had an offsite demo of its virtual object creation system.

“This innovative technology uses a phased array of standard ultrasonic rangefinders to create pressure at up to four arbitrary points within a conical volume above the array. By refreshing the points’ locations at speeds to 20 kHz, the array can create the illusion of a physical object in mid-air. The sensation is light (like a soap bubble bursting) but quite noticeable. The aim is to give gesture control systems a physical presence to help guide user interaction. Knobs, buttons, sliders, and the like are readily created using the array, as are a variety of surface textures.”

A development kit for those interested will be available after the first of the year.

The Zephyr Project aims to bring real-time to the IoT
Last but not least on Rich’s list is the Zephyr Project, an RTOS for IoT device developers.

“This open-source, community-created code is available through the Linux Foundation under the permissive Apache 2.0 license to make it as uncomplicated to use and deploy as possible.”

The aim? “’To be the Linux of microcontrollers.’”

Sounds like ARM TechCon was, as always, plenty interesting. Thanks again to Rich for sharing his findings and insights.

 

What you missed if you weren’t at ARM TechCon

We weren’t at the recent ARM TechCon, but we’re always interested in hearing about what’s new that’s ARM-related. So we were very interested in Rich Quinnell’s account (on EE Times) of the ten most intriguing technologies he saw on display there. Here’s part one of my quick summary of Rich’s article:

Ada development for the IoT
AdaCore, which has been primarily focused on aerospace and defense, is now making a move into automotive and the Internet of Things (IoT). Ada’s strengths are in its testing, quality assurance, porting, and functionality-upgrading capabilities. “Because Ada tools verify software while it is being developed, Ada projects tend to be those where software quality and reliability are paramount.” In light of security concerns around the IoT – made more pronounced by the recent major DNS event – the fact that Ada helps developers avoid the sorts of software vulnerabilities that hamper security should play well for IoT apps.

Low-impact location
Comtech Telecommunications is bringing out the IoT Location Platform, which offers location info without requiring a GPS or external beacon.

“Rather than performing the location calculations on the device…a 2kB software agent simply captures information from the device’s wireless links, such as signal strength and server ID codes, and sends a short (~200 byte) message to Comtech servers, which perform the calculation needed to turn that information into a location fix. From there Comtech can provide a variety of location-based services to IoT device users, including real-time mapping of device locations.”
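To make the idea concrete, here’s a rough sketch of the kind of uplink message such an agent could send. This is emphatically not Comtech’s actual format or API – only the ~200-byte budget and the signal-strength/server-ID idea come from the description above; the struct layout and every name in it are mine.

```c
#include <stdint.h>
#include <stdio.h>

#define MAX_OBSERVATIONS 12

/* One raw radio observation: the kind of data a tiny on-device agent can
 * collect without doing any position math itself. */
typedef struct {
    uint32_t transmitter_id;   /* serving cell ID, AP identifier, etc. */
    int8_t   rssi_dbm;         /* received signal strength */
    uint8_t  radio_type;       /* 0 = cellular, 1 = Wi-Fi, 2 = BLE, ... */
} observation_t;

/* The report the agent uploads; the server turns this into a location fix. */
typedef struct {
    uint32_t      device_id;
    uint32_t      timestamp;          /* seconds since epoch */
    uint8_t       num_observations;
    observation_t obs[MAX_OBSERVATIONS];
} location_report_t;

int main(void)
{
    printf("sizeof(location_report_t) = %zu bytes\n",
           sizeof(location_report_t));
    return 0;
}
```

A dozen observations come to roughly 100 bytes before any transport framing – comfortably inside a ~200-byte budget, with no GPS receiver (or its power draw) needed on the device at all.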

Resistive RAM
Crossbar has something that it calls resistive RAM (RRAM).

“The RRAM cell is a vertical resistive layer of amorphous silicon between two metal pads. The device stores data by imposing a voltage across that layer to create controlled growth of a conductive “filament” through it. The filament does not fully short between the metal layers, however, which allows the process to be reversed to eliminate the filament.”

RRAM is fast, “a considerable advantage over traditional Flash in terms of access times as well as providing energy savings by eliminating the need to block erase in order to alter a memory location.”

Managing IoT devices
Device Pilot won ARM TechCon’s Best Software Product award this year, with a SaaS offering that automates management of IoT devices. It monitors the data stream coming from the devices, and sends out an alert and/or takes remedial action if something seems “off” – like there’s no data being sent. Automating the management of IoT devices will become increasingly important as these devices proliferate.

Reducing SoC design risks
FlexLogix is primarily interested in licensing its IP cores, rather than focusing on chips. So it’s built test chips that let developers prove out their designs in silicon. With this approach, FlexLogix is looking:

“… to enable SoC designers to add FPGA flexibility into their designs to help future-proof them and to reduce risk by making critical interfaces configurable to allow for changes in specifications or standards.”

Alongside the silicon technology it offers, FlexLogix also provides tools “that help simplify the design-in of the FPGAs.”

We’ll cover the next five technologies in next week’s post.

Meanwhile, a Critical Link hats off to Rich Quinnell. Reading your report on the show was the next best thing to being there.

Google’s new OS: Fuchsia


Recently, there’s been a lot of buzz and speculation about a newly discovered operating system (OS) project on Google Code (also mirrored on GitHub): Fuchsia. Upon inspection, and as noted in several articles, Fuchsia’s core kernel services appear to be based on Magenta, which in turn sits on top of Little Kernel (LK) – a small, open-source, lightweight kernel providing basic real-time OS (RTOS) services, similar to ThreadX.

Not Linux.

My first response when I read about Fuchsia was deep concern. Then I let things simmer a bit, and I have to say, as an embedded developer supporting various System on Chip (SOC) platforms, this is really great news.

Don’t get me wrong. I really like Linux. The code base it supports and the sheer number of contributors give you a plethora of capability once you have a working kernel and relevant device drivers for your device. But, as any embedded developer will tell you, Linux is far from free (as in “free lunch”), especially if you are trying to hammer it into some of the more complex SOCs appearing on the market today.

Trying to sort out which capabilities of an SOC peripheral a given Linux device driver supports, how it has been tested, patching it to support your platform configuration (which is never the same as the devkits chip manufacturers promote), and pushing such patches back to the mainline can be an arduous and daunting task. Squeezing performance out of SOCs running Linux can be a nightmare. Most Linux drivers aren’t written with zero-copy or low-latency notifications to user/application space in mind. They can’t be while complying with the intrinsic architecture Linux provides. It’s not an RTOS.

A lot of times, the device driver you need doesn’t even make it into the mainline for months or even years after basic platform support is added. Sure, you can write a new device/kernel module to do what you need. But that means spending a lot of time understanding the inner workings of the Linux kernel (e.g., dealing with the device tree and the device model, the Linux config/make system, kernel vs. userspace memory allocation, DMA memory mapping, the basics of Linux memory page tables, and interrupt registration) as well as the initialization subsystem (bootloading, then launching systemd or SysV init startup scripts). There is a large class of embedded C/C++ application developers who still can’t do that, and they work for companies that don’t believe they can afford to train them.
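Just to put that ramp-up in perspective, here’s roughly the minimum skeleton a new Linux platform driver starts from, before a single line of actual peripheral logic, DMA setup, or performance tuning gets written. It’s a generic, illustrative sketch – the “acme,demo-peripheral” compatible string and all the names are made up, and it isn’t tied to any particular SOC.

```c
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/of.h>
#include <linux/interrupt.h>

/* Hypothetical compatible string; in a real driver this has to match the
 * node in your board's device tree. */
static const struct of_device_id demo_of_match[] = {
    { .compatible = "acme,demo-peripheral" },
    { /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, demo_of_match);

static irqreturn_t demo_irq_handler(int irq, void *dev_id)
{
    /* Acknowledge the hardware and wake any waiting user-space reader.
     * Getting data up to user space with low latency (mmap, zero-copy,
     * and so on) is where most of the real work (and pain) lives. */
    return IRQ_HANDLED;
}

static int demo_probe(struct platform_device *pdev)
{
    int irq = platform_get_irq(pdev, 0);

    if (irq < 0)
        return irq;

    /* devm_* resources are released automatically when the driver detaches. */
    return devm_request_irq(&pdev->dev, irq, demo_irq_handler, 0,
                            dev_name(&pdev->dev), pdev);
}

static int demo_remove(struct platform_device *pdev)
{
    return 0;
}

static struct platform_driver demo_driver = {
    .probe  = demo_probe,
    .remove = demo_remove,
    .driver = {
        .name           = "demo-peripheral",
        .of_match_table = demo_of_match,
    },
};
module_platform_driver(demo_driver);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal platform driver skeleton (illustrative only)");
```

And that’s just the module: the matching device tree node, the Kconfig/Makefile plumbing, and whatever boot-time scripting loads it all come on top.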

Enter Fuchsia. If Google really throws its weight behind it and keeps it open source, it’s a win for embedded device developers. It’s an option for folks looking for a stable, full-featured RTOS that doesn’t present the barrier to entry of licensing costs, which can be significant for smaller companies that would rather invest sweat equity than cold hard cash in developing their first products. It’s another “free” choice that opens the door to a large community of coders who prefer living a little closer to the bare metal but are desperate to use the modern SOCs rolling out from chip manufacturers.

How could that be anything but good news?

 

 

The Future of Test & Measurement

Back in September, Martin Rowe had a very interesting piece on EDN entitled “What will drive test & measurement?” In his column, Rowe combines the answers that two of his readers gave to some questions he had earlier posed about the test industry. His respondents represent two different generations – the old hands and the up-and-comers. Weighing in for the old hands was Bob Witte, VP of Technology Strategy for Keysight Technologies; for the up-and-comers, there was Shayan Ushani, a student at Bryant University who’s also an entrepreneur.

The first question Rowe asked was whether hardware has become a commodity, and software is really the only thing that matters.

Witte sees that, while hardware commoditization may be on the way, “there are still significant measurement challenges that can only be achieved with high-performance, usually custom hardware.” Ushani believes that “software is the key to the progress of any high-tech device and, in general, the future technological revolutions. Clearly, the test and measurement industry is no exception.” I guess it’s no surprise that, as an older hand, I’m with Witte that hardware still matters.

The next question was about how the IoT will impact test and measurement, especially with respect to security. This is an especially timely question, given the recent hacking incident in which household IoT devices were hijacked to take part in major denial of service attacks. In Witte’s opinion, IoT devices are the future of data collection for T&M, and security remains an issue that has yet to be resolved. “Access to IoT data must be controlled but cannot be too cumbersome, which will impede broad deployment.” Meanwhile, because so many IoT products come in at a low price point, there’ll be pressure on test to keep costs down.

Ushani sees that “IoT and predictive analytics will definitely play a big role in the future of test and measurement…From wearable technology to remote tracking systems, artificial neural networks will be implemented in T&M networks.”

Another question posed was “will the traditional test engineer be a job of the past?” Witte answers this one at length, concluding that test engineers will need to keep building new skills, so the job will change but likely not disappear. This was followed by a question on whether engineers will need more in the way of software skills than hardware skills. Witte writes that, “even if they don’t “write code” as a main part of their job, knowing how to create and manipulate software is an extremely valuable skill.” Ushani’s take is that “Software developers will become a norm even in the hardware arena.”

I’m not going to replicate the entire article here, but the back-and-forth is very enlightening.

Of course, no one person ever really speaks for an entire generation, but it’s interesting to see just how different the perspectives can be between an old hand and an up-and-comer!

Eye Spy

Even if we keep it relatively simple by using the same password – or variants of it – for multiple purposes, or we use some sort of single sign-on application, it’s still hard and irritating to keep track of all the PIN codes and passwords we need to keep everyday life running smoothly. Given how often we all use our phones in the course of a day, one of the most irritating is the phone password. And yet it’s one of the most important ones, given how relatively easy it is to lose a smartphone – as opposed to an IoT device attached to the wall in our home – and how much info so many of us keep on those smartphones.

Biometrics has been coming to the rescue for years to replace the old-school password approach. Most common to date are the fingerprint identification methods that some smartphones offer. Fingerprints, as it turns out, can be stolen. I read somewhere that Play-Doh can even be used to do the job. Then there’s the fear that your finger could get hacked off. (Fortunately, it turns out that the technology can detect whether the finger is attached to a live body or not.) Then there’s the more reasonable fear that the fingerprint database in the sky will be hacked. And, unlike with a password, you can’t just go and change your fingerprint.

Now all eyes seem to be turning to iris-scans and/or facial recognition. And one of the technology leaders here is FotoNation, the company that – bless them – came up with the algorithms that cameras and smartphones have been using for years to eliminate “red eye” photos.

“Other algorithms exist for robust iris recognition, but according to FotoNation it has the only IP that does not need connection to cloud computing resources. Running in standalone mode [Sumat] Mehta claims it has a one in 10 million false-acceptance rate, compared to 1-in-10,000 for its closest competitor. Mehta attributes its accuracy to its use of facial tracking of both irises simultaneously.” (Source: EE Times)

Well, you can’t believe everything you read, but this does sound interesting. They don’t need the cloud because they’ve developed hardware accelerator IP that enables extremely fast local processing.

“[FotoNation’s] algorithm, as almost all artificial intelligence (AI) algorithms today, is based on a multi-layered neural network performing deep learning on user data. And because it tracks your facial features too, you don’t have to stare at the phone to get it to work. It also cannot be “spoofed,” according to Mehta, by taking a photo of the user.”

FotoNation’s not the only player here, of course. The Samsung Galaxy Note7 has an iris scanner. (Apple is not expected to incorporate one in the iPhone for another couple of years. But with all the problems the Galaxy has been having with exploding devices, I’m not sure I’d feel all that confident holding one up to my eye for a scan every time I wanted to use it!)

It will be interesting to see whether iris scanning (and facial recognition) will be incorporated in consumer applications, as they have been in security systems and other commercial/industrial settings.

ARM in Space

Like most “science kids”, I grew up interested in rockets and space exploration. I haven’t completely outgrown it – does anyone ever? – so it was not surprising that a recent blog on EDN by Rajan Bedi, Spacecraft data handling using ARM-based processors, caught my eye. (Bedi is the CEO of a consulting company called Spacechips, which focuses on space electronics. Is Spacechips a great name or what?)

Bedi writes that, at some space industry FPGA conferences he had attended this year, there was much discussion about “the need for a small, low-power, high performing MCU to replace larger, more dissipative FPGAs. For localized control and processing, such as sensor TT&C [telemetry, tracking, and command] or digital control of a voltage regulator, a dedicated MCU would offer a more efficient CPU/DSP option.”

He then suggests that the answer may be close at hand, given that ARM-based chips are so ubiquitous today. How ubiquitous? Try 90 billion of them, in our phones, tablets, cars, wearables, and IoT devices. That’s ubiquity.

The ubiquitous ARM architecture offers small, low-power, high-performance cores, many of which are being used in safety-critical applications, such as car braking systems, power steering, self-driving vehicles, aircraft, medical, railway and industrial control sub-systems, conforming to fail-safe standards.

Bedi points to the proven reliability of ARM-based systems, and asks:

“…could the space industry also benefit from the performance, power, size, ease of use, and accessibility benefits of the ARM architecture? There is a huge, tried and tested ecosystem available to enable developers to build reliable control and DSP embedded applications.”

After mentioning a few commercially available space-grade options, he goes into a full description of the use of an ARM processor to handle localized control and processing functions. Okay, it’s pretty much a free ad for Vorago’s VA 10820, a “radiation hardened ARM Cortex-M0 MCU”, but that doesn’t take much if anything away from the overall write-up. This radiation-hardened technology, by the way, will soon be getting a space shot. It’s deployed “on the STP-H5 payload to be launched by SpaceX and further parts will enter orbit next year on a GEO mission as well as a LEO spacecraft.”

If you’re one of those “science kids” who grew up interested in rockets and space exploration, and went on to become an electronics engineer, you should definitely read the entire post.

More than a software company, that’s for sure

I know, I know.

There’s plenty of evidence that Microsoft is more than a software company, even if old guys like me who remember when Windows was just becoming a force, and when everyone was beginning to write memos in Word, create presentations in PowerPoint, and balance the company budget in Excel, tend to think of them that way.

What’s the evidence? Xbox. Surface. Cloud computing services.

And if I needed any more, it’s all there in a recent Wired article by Cade Metz that my colleague Alex King sent my way.

The article focuses on Microsoft’s work with the programmable chips they needed to run all their servers and networking gear. And it starts out by recounting a 2012 pitch that Doug Burger, Microsoft’s then-Director of Client and Cloud Application (and now Distinguished Engineer), made to Microsoft’s then-CEO (and now owner of the LA Clippers). What he was pitching was a project that “would equip all of Microsoft’s servers—millions of them—with specialized chips that the company could reprogram for particular tasks.”

At his presentation, Burger:

…told Ballmer that companies like Google and Amazon were already moving in this direction. He said the world’s hardware makers wouldn’t provide what Microsoft needed to run its online services. He said that Microsoft would fall behind if it didn’t build its own hardware.

The upshot was that Microsoft began working on programmable chips that rely on FPGAs. They’re already what’s behind Bing, they drive Microsoft’s cloud computing services, and they’re becoming part of the company’s work on neural networks. FPGAs are, in Burger’s words, “now Microsoft’s standard, worldwide architecture.”

Why FPGAs?

All those sophisticated services – Bing, neural networks – are making demands on processors that can’t be accommodated by just adding more CPUs to the mix.

But on the other hand, it’s generally too expensive to create specialized, purpose-built chips for every new problem. FPGAs bridge the gap. They let engineers build chips that are faster and less energy-hungry than an assembly-line, general-purpose CPU, but customizable so they handle the new problems of ever-shifting technologies and business models.

At Critical Link, we’re rather partial to FPGAs ourselves, and incorporate them in a number of our SoMs (sometimes as an option). Microsoft uses FPGAs from Altera (which last year was acquired by Intel). Although admittedly on a smaller scale than Microsoft, we also work with Altera. We’re a member of the Altera Design Services Network, and our MitySOM-5CSX features the Altera Cyclone V SX-U672, which combines FPGA logic and a dual-core ARM Cortex-A9 processor subsystem.
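For a sense of what that FPGA-plus-ARM combination buys you in practice, here’s a minimal sketch of how a Linux application running on the ARM (HPS) side of a Cyclone V SoC can read a register implemented in the FPGA fabric through the lightweight HPS-to-FPGA bridge. The 0xFF200000 base address is the standard lightweight bridge window on Cyclone V; the register offset, and the assumption that anything useful lives there, refer to a hypothetical FPGA design rather than anything specific to the MitySOM-5CSX.

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define LWH2F_BRIDGE_BASE 0xFF200000u  /* lightweight HPS-to-FPGA bridge */
#define MAP_SPAN          0x00001000u
#define REG_OFFSET        0x0u         /* hypothetical register in your design */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) {
        perror("open /dev/mem");
        return 1;
    }

    void *map = mmap(NULL, MAP_SPAN, PROT_READ | PROT_WRITE, MAP_SHARED,
                     fd, LWH2F_BRIDGE_BASE);
    if (map == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    /* Registers in the FPGA fabric now appear as ordinary memory. */
    volatile uint32_t *regs = (volatile uint32_t *)map;
    printf("FPGA register at offset 0x%x reads 0x%08x\n",
           (unsigned)REG_OFFSET, regs[REG_OFFSET / 4]);

    munmap(map, MAP_SPAN);
    close(fd);
    return 0;
}
```

In a real product you’d normally put this behind a proper driver rather than poking /dev/mem, but it shows why having the FPGA fabric and the CPU cores on one die is so convenient.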

Anyway, the full article is quite detailed and interesting, and is definitely worth a read.

But rather than leave you with that suggestion, I thought I’d close with one of the issues that Microsoft had to address when they started messing around with chips:

…[Microsoft] certainly didn’t have the tools and the engineers needed to program computer chips—a task that’s difficult, time consuming, expensive, and kind of weird. Microsoft programming computer chips was like Coca Cola making shark fin soup.

We’ve never thought of it as weird, but maybe that’s because we’ve been programming computer chips for such a long time!

Once again, we revisit PTC

Once again, there’s been a train accident which, if there’d been Positive Train Control (PTC) in use, might have been avoided.

So once again, we’re revisiting PTC.

Our first post on the subject, in December 2013, was in response to a commuter train crash in the Bronx in which four people were killed. Then there was the May 2015 Amtrak derailment in Philadelphia – 8 killed, 200 injured. We posted about that here. The National Transportation Safety Board (NTSB) investigation found that PTC would likely have prevented both of these incidents.

Which brings us to the most recent accident, a commuter rail crash in Hoboken, NJ, where having PTC in place might have resulted in an outcome other than one dead and over 100 injured.

When we first wrote about PTC, it was supposed to have been implemented by the end of 2015. Under pressure from the railroad industry, this was pushed back to the end of 2018. Too late for those killed in Philadelphia and Hoboken. And, as it turned out, NJ Transit had already been granted a PTC waiver from the Federal Railroad Administration (FRA), as long as it installed something comparable. As of this writing, the NTSB hasn’t determined whether any speed control mechanisms were installed in Hoboken. Whatever the answer, what was in place was insufficient to do the job.

PTC is an advanced system that:

“…uses communication-based/processor-based train control technology that provides a system capable of reliably and functionally preventing train-to-train collisions, overspeed derailments, incursions into established work zone limits, and the movement of a train through a main line switch in the wrong position.” (Source: FRA)

In any case, given the waiver, Hoboken would not have had PTC installed. What would have been in place is something called Positive Train Stop (PTS), which is a less advanced precursor to PTC. Would PTS have prevented the Hoboken crash? Would PTC? Too soon to know.

At Critical Link, we have a special interest in PTC, as we have worked on this valuable and life-saving technology. When we wrote about PTC in 2015, here’s a bit of what we had to say:

We are proud of the role we play in this important technology. Railroad systems are complex. So is PTC: it can’t be implemented overnight, so it’s understandable that it takes time to get all the different segments of a system up and running.  But PTC can save lives. If something good can come out of the Philadelphia derailment, it may be that we take another hard look at this important technology.

Substitute Hoboken for Philadelphia and we’re still hoping that there’ll be more attention paid to PTC.

And nearly three years after we first wrote about PTC, I find that we need to conclude this post the same way we did then:

When PTC becomes more widespread, it’s safe to say that rail travel will become safer – something we all want to happen as soon as possible.

I hope that the next time we revisit PTC it won’t be because, yet again, there was a fatal accident that PTC might have prevented.