
Autonomous Vehicles: Myths according to Philip Koopman

As regular Critical Link blog readers know, I’m both a car guy and a technology guy, so I’m naturally drawn to articles at the intersection of cars and tech. Thus, my interest in autonomous vehicles (AVs). EE Times recently ran a provocative article by Carnegie Mellon associate professor Philip Koopman, whose specialty is AV safety. While Koopman definitely sees the potential benefits of AVs, he argues that AV technology is still “immature,” and that the AV industry’s campaign to keep regulatory oversight at a minimum is wrong-headed. This campaign, Koopman says, is based on what he calls “The Dirty Dozen” myths about AVs – a list of myths he lays out and debunks.

Using Koopman’s myth statements in full, I’ll briefly summarize his points, but urge you to read the full piece. Even if you don’t agree entirely with his premise that we need more AV regulation, not less, overall his points hold a great deal of validity.

Myth #1: 94 percent of crashes are due to human driver error, so AVs will be safer. The gist of Koopman’s argument is that this misrepresents the original findings of a study, which concluded that “94 percent of the time a human driver might have helped avoid a bad outcome” – not exactly the same as saying 94 percent of accidents are caused by the driver. Drivers do make mistakes; so will AVs. It’s just that their mistakes will be different. AVs won’t be texting while driving, but there’s plenty that can go wrong.

Myth #2: You can have either innovation or regulation – not both. In fact, Koopman argues, you can have both. And that regulations can be written that explicitly allow for innovation. Even if the industry just sticks to its own generally agreed upon standards for safety, training, and testing, there’ll be plenty of room to innovate.

Myth #3: There are already sufficient regulations in place. Other than in NYC – where there are an awful lot of cars on the streets and pedestrians in the crosswalks – Koopman says there are precious few regulations. When it comes to safety, regulation equates to little more than taking the manufacturer’s word for it. Gulp!

Myth #4: We don’t need proactive AV regulation because of existing regulations and pressure from liability exposure. Not true, Koopman says. The regulations covered under today’s Federal Motor Vehicle Safety Standards are principally focused on basic safety functions – think headlights and seat belts – and do not address computer-based system safety. Which is pretty much what AVs are about. With respect to liability exposure, Koopman’s wary that companies with deep pockets for development may not mind making a few payouts (however large) to the families of those killed by an AV.

Myth #5: Existing safety standards aren’t appropriate because (pick one or more):

    • They are not a perfect fit;
    • No single standard applies to the whole vehicle;
    • They would reduce safety because they prevent the developer from doing more;
    • They would force the AV to be less safe;
    • They were not written specifically for AVs.

Hogwash, according to Koopman. There are existing safety standards that are flexible, and which cover AVs. Nothing in these standards would keep a developer from doing more than required and none of them would render AVs less safe (an argument Koopman finds “laughable”).

Myth #6: Local and state regulations need to be stopped to avoid a “patchwork” of regulations that inhibits innovation. These “patchwork” regulations exist in large part because AV companies pressure the states they’re working in to go easy on regulations, threatening to pack up and take their jobs to a more accommodating state. “Moving to regulation based on industry standards would help the situation. A federal regulation that prevents states from acting but does not itself ensure safety would make things worse.”

Myth #7: We conform to the “spirit” of some standard. Spirit of the law vs. letter of the law arguments may seem like hairsplitting, but this makes Koopman’s point pretty clearly:

Consider whether you would ride in an autonomous airplane in which the manufacturer said: “We conform to the spirit of the aviation safety standards, but we’re very smart and our airplane is very special, so we took liberties. Trust us – everything will be fine.”

Myth #8: Government regulators aren’t smart enough about the technology to regulate it. There is no denying that the technology underpinning AVs is complex. It’s not really a matter of smart vs. not so smart; it comes down to technology that is rapidly changing and requires tremendous expertise to truly understand. But Koopman points out that the plan being proposed by the U.S. Department of Transportation invokes the standards that the AV industry has already agreed to. Which sounds smart enough to me.

Myth #9: Disclosing testing data gives away the secret sauce for autonomy. Nope, Koopman says. Reporting testing data won’t disclose a company’s intellectual property.

Myth #10: Delaying deployment of AVs is tantamount to killing people. So far, the safety benefits of AVs are all in the long-promised future. And so far, there’s nothing to prove that AVs will be safer than cars driven by humans, especially given that those human-driven vehicles are getting safer thanks to the growing use of active safety features (e.g., automated emergency braking).

Myth #11: We haven’t killed anyone, so that must mean we are safe. Hmmmmm. I just googled and it seems that there have been at least a handful of deaths associated with AVs. Not to mention that ‘so far, so good’ doesn’t equate to perfect safety.

Myth #12: Other states/cities let us test without any restrictions, so you should too. Road testing is important, and no one’s going to trust AVs until and unless they’re fully road-tested. But just because one jurisdiction allows it, that doesn’t make it safe to do. Here, Koopman gets back to his central theme: the AV industry really just has to adhere to its own agreed-upon safety standards and not put at risk innocent drivers and pedestrians, caught up unwittingly in someone else’s experiment.

As disappointing as it’s been that AVs aren’t further along by this point, I’d hate to see things rushed to the point where enough public failures make the risks seem to outweigh the rewards of autonomous vehicles. That could trigger a backlash – and some over-reacting, over-regulation – that stymies advances for years to come. Sounds like some regulation now is in order.

Do we really need men’s cologne to be connected?

I’ve never been all that much of a cologne guy, but I do remember as a kid seeing ads that were either a bit edgy and suggestive, or a bit funny and suggestive. And there were a lot of those ads on TV and in print back in the day. The main one in the funny/suggestive category was for Hai Karate. In the edgy/suggestive category, one that comes to mind was for Paco Rabanne.

Fast forward a few decades, and I haven’t given all that much thought to Paco Rabanne. And if I had given it much thought, that thought would probably have been that Paco Rabanne cologne was no longer around.

Turns out, Paco Rabanne is still around. And, amazingly (to me at least), it’s gone high tech.

They’ve come out with a new men’s fragrance. Phantom comes in a bottle that looks like a robot (which tells me that, even though they’re billing the robot as a “wingman,” they’ve pretty much moved away from edgy/suggestive). And in the robot’s head (i.e., the cap), they’ve embedded an ST25TV NFC (near-field communication) tag chip from STMicroelectronics, a major European semiconductor maker.

A number of different organizations worked on this new product, and there is some cool technology at play:

Together, the partners worked out how to embed a general-purpose, NFC-certified Type 5 tag IC for maximum operating volume and range along with a custom small antenna into a space-constrained perfume bottle cap where the parasitic effects of the shiny chrome-metal finishing can wreak havoc on connectivity. Using NFC eliminates the need for a battery in the cap, as the tag gets powered by the contactless fields between the tag IC and the connecting device —typically a smartphone, or tablet.

…The ST25TV is part of ST’s NFC/RFID tag IC series, which offers specific modes to protect tag access, including kill and untraceable modes, 64-bit encrypted password with a failed attempt counter to protect read/write access to user memory, and a digital signature that can be used to prove the origin of the chip in cloning detection. A tamper detection option is also available. The ST25TV tag ICs also contain a configurable EEPROM with 60-year data retention and can be operated from a 13.56 MHz long-range RFID reader or an NFC phone.

The contactless interface on the ST25TV devices is compatible with the ISO/IEC 15693 standard and NFC Forum Type 5 tag. The ST25TV is NFC Forum certified, which ensures interoperability with all NFC-enabled smartphones. (Source: Embedded.com)

The fragrance itself was developed using AI and neuroscience to pick out the right combination of ingredients. But it’s the technology used to create the first “connected fragrance” that’s getting the attention.

I’m always proud – justifiably so, IMHO – of the remarkable things our industry contributes to the betterment of mankind in so many arenas: healthcare, communications, transportation, entertainment… But some of the uses of technology just strike me as a bit silly. I know that Paco Rabanne is trying to engage its customers, and build loyalty, by offering extras – content you access by tapping the bottle with your phone. It’s a marketing thing, not a technology thing. Seriously, though, do we really need a men’s cologne to be connected?

Technology that’s strictly for the birds

A couple of weeks ago, a friend sent me an article she’d seen in The New Yorker on the technology used in birding. When I think of birding, I picture someone with binoculars around their neck, a field guide to birds in their hand, and a birdcall in their pocket. As with so many aspects of life, technology now plays a big role, and when it comes to birding, well, there’s an app for that.

The app mentioned in The New Yorker piece is Merlin, a free application developed by the Cornell Lab of Ornithology.

When it was first launched, Merlin – which taps the hundreds of millions of bird-sighting records held in Cornell’s eBird system – was an AI-based search system. It asks the user for the date and location of a sighting, along with physical descriptors (color, size) and behavior, then offers the birder a shortlist of possibilities. If there’s a hit, the birder clicks the “That’s My Bird” button, and Merlin learns from it.
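To make the flow concrete, here’s a toy sketch of the kind of candidate-scoring a shortlist app might do. The species data, field names, and weights are all invented for illustration – this is not Merlin’s actual algorithm or data.

```python
# Toy bird-shortlist sketch: score candidate species against a sighting
# report by simple feature overlap. All data here is made up.
SPECIES = [
    {"name": "Northern Cardinal", "colors": {"red"}, "size": "medium", "regions": {"NY"}},
    {"name": "Blue Jay", "colors": {"blue", "white"}, "size": "medium", "regions": {"NY"}},
    {"name": "American Goldfinch", "colors": {"yellow", "black"}, "size": "small", "regions": {"NY"}},
]

def shortlist(report, species=SPECIES, top_n=2):
    """Rank species by how well they match the user's report."""
    def score(sp):
        s = 2 * len(report["colors"] & sp["colors"])      # color matches weigh heavily
        s += 1 if report["size"] == sp["size"] else 0      # size agreement
        s += 1 if report["region"] in sp["regions"] else 0 # plausible location
        return s
    ranked = sorted(species, key=score, reverse=True)
    return [sp["name"] for sp in ranked[:top_n]]

print(shortlist({"colors": {"red"}, "size": "medium", "region": "NY"}))
# → ['Northern Cardinal', 'Blue Jay']
```

A “That’s My Bird” click could then feed back into the weights – which is roughly what “Merlin learns from it” implies.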

Several years after launching Merlin in 2014, Cornell came out with Merlin Photo ID. Developed in collaboration with the computational vision lab at Caltech, and Cornell’s computer vision group, Photo ID lets users send in a picture of a bird and, using computer vision technology, makes the identification.

Sometimes, of course, birds aren’t seen; they’re heard. So Cornell created Merlin Sound ID, which was released early this summer. Sound ID, like its Photo ID companion, uses computer vision tools in the identification process.

When the app records a bird call, it generates a spectrogram, which looks like a tracing made by a seismograph during an earthquake. Different species’ spectrograms aren’t as individual as fingerprints, but almost. Sound ID is powered by an artificial-intelligence algorithm that bird-identification experts and lab staff have trained by feeding it thousands of spectrograms submitted by birders (through ebird.org, one of the lab’s sites) and annotated by bird-sound experts. (Source: The New Yorker – may require a subscription.)
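The spectrogram step described above is standard signal processing. Here’s a minimal sketch that generates a synthetic “bird call” (a frequency sweep) and turns it into a spectrogram with SciPy – just the preprocessing, not Cornell’s classifier.

```python
# Build a spectrogram of a synthetic chirp, the kind of image a
# Sound ID-style model would take as input.
import numpy as np
from scipy import signal

fs = 22050                                   # sample rate (Hz)
t = np.linspace(0, 1.0, fs, endpoint=False)  # one second of audio
# Sweep from 2 kHz to 6 kHz, roughly songbird range
call = signal.chirp(t, f0=2000, f1=6000, t1=1.0)

freqs, times, Sxx = signal.spectrogram(call, fs=fs, nperseg=512)
print(Sxx.shape)   # (frequency bins, time slices) -- the "tracing"
```

Each species’ calls trace out characteristic shapes in this frequency-vs-time image, which is why image-recognition techniques transfer so well.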

The overall Merlin app is something of a field guide. It has content – photos, sounds, and other info – on over 7,500 different bird species. These are localized, so if you’re birding in Ecuador or New Zealand or a lot of other places, you’ve got a bird pack just for you. (They don’t have a ton of technical info on their website – not much more than was found in the article – but the Cornell Ornithology Lab looks like an interesting place. If you’re a birder, it’s definitely worth a virtual visit.)

Organs-on-a-Chip

You may have seen the recent news about Colossal, a recently-funded Boston startup that’s reengineering the Asian elephant, turning it into a neo-woolly mammoth and reintroducing it to the Arctic, from whence it disappeared 10,000 years ago. Colossal is an offspring of Harvard’s Wyss Institute for Biologically Inspired Engineering. The re-creation of the woolly mammoth may seem fanciful, but the work could help save the endangered Asian elephant species. And it’s hoped that once the woolly mammoth is back in the Arctic, it will have a positive impact on global warming by allowing for the restoration of grasses, which reflect sunlight better than the conifers that replaced them once the woolies disappeared.

Whether the woolly mammoth project turns out to be more fanciful than useful, there’s another Boston-based company spinning out of the Wyss Institute that’s also recently received a hefty investment infusion. And their work is decidedly non-fanciful.

Emulate develops clear plastic chips that can simulate human organs. These are used to test out drugs without having to use animal guinea pigs.

Emulate says that its chips enable researchers to replicate and study human biology and disease with greater precision and detail than is possible with conventional cell cultures or laboratory animals. (Source: Boston Globe)

The organs that Emulate is putting on chips are the brain, colon/intestine, duodenum/intestine, kidney, liver, and lung. The application areas that the chips are used in include cancer, inflammation, infectious disease, and ADME-TOX (checking for Absorption, Distribution, Metabolism, Excretion, and Toxicity).

Emulate’s offering includes instruments and accessories that help make the organ chips easier to use and yield more reliable data; development kits; and software to help design studies and analyze results. The goal: making drug development faster, better, and cheaper. Here’s hoping they can achieve all three!

Streamlining drug development is greatly needed. According to the Wyss Institute, it can take a decade and over $3 billion to take a drug from lab to approval. And, because animals are an imperfect model of humans, there are times when drugs that are safe for animals don’t work so well on humans. Once they get to clinical trials, drugs may prove less effective, or even cause harm. Organs-on-a-chip more closely replicate human physiology, and deploying them could well accelerate drug development.

Here’s how the Wyss Institute describes things:

“Organs-on-Chips” (Organ Chips) [are] microfluidic culture devices that recapitulate the complex structures and functions of living human organs. These microdevices are composed of a clear flexible polymer about the size of a USB memory stick that contains hollow microfluidic channels lined with living human organ cells and human blood vessel cells. These living, three-dimensional cross-sections of human organs provide a window into their inner workings and the effects that drugs can have on them, without involving humans or animals. (Source: Wyss Institute)

This chip doesn’t resemble the chips I’m familiar with, but I find them completely fascinating. And also heartening. So many of us have lost loved ones to devastating diseases. Not to get overly personal, but my mother died of cancer, my father of ALS. Technology that can speed up drug development, and make personalized treatments more possible – as organ chips promise to do – is so urgently needed, and so welcome. I’ve said it before, and I’ll say it again: I’m immensely proud to be part of an industry that does so much good for humankind.

Nanotech in healthcare. (And no, I don’t want to live forever.)

I’m old enough to remember when the latest and greatest computing technology was – well, there’s really no other word for it – clunky. Oh, maybe not computers as big as a barn that couldn’t do all that much, but so-called portables that weighed a ton (or seemed like it, if you were lugging one); paper-based terminals that connected to mainframes at speeds that rivaled the Pony Express; and floppy disks that actually were floppy. So I’m continually marveling at how things have gotten more compact (not to mention faster and higher performing). When I stop to consider what we’re now able to pack onto a board and compare it to just a few years back, I sometimes shake my head in wonderment.

And when I think about nanotech – matter used on an atomic, molecular, and supramolecular scale – I’m in total awe.

How small is small? A device working on the nanoscale can be “up to 100,000 times smaller than the width of a human hair.” Now, I’m not about to pull out one of my remaining human hairs to see if I can gauge the magnitude, but let’s just say that this is astonishingly small.
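Just as a sanity check on that quoted figure: if we assume a human hair is roughly 100 micrometers wide (a common ballpark – the actual width varies), the arithmetic lands right at the nanometer scale.

```python
# Back-of-envelope check of the "100,000 times smaller than a hair" claim.
# Hair width is an assumed ballpark, not a measured value.
hair_width_m = 100e-6                    # ~100 micrometers
nanodevice_m = hair_width_m / 100_000
print(nanodevice_m)                      # 1e-09 m, i.e. about 1 nanometer
```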

Small, but plenty powerful.

One such material composed of carbon atoms is 100 times stronger than steel but six times lighter. Other metamaterials, such as quantum dots, can produce far more power than conventional solar or electrical cells despite being zero-dimensional. (Source: ZME Science)

In the piece cited here, medical writer Michelle Peterson provides a detailed look at nanorobotics – and how they’ll be used in the not-so-distant future in healthcare.

After identifying four types of nanorobotics – too complex to get into here, but worth the read if you’re interested – Peterson gets into the applications. Although there are many potential use cases (military, industrial, environmental) for nanorobotics, the principal focus to date has been in the health industry.

Regarding medical applications, functions such as healing wounds, atomic-scale surgical equipment, and traversing through the body to find and treat ailments are most commonly theorized.

For cancer treatment, it’s anticipated that we may be on our way to being able to detect cancer “on a single-cell level.” With health-sensing technology, biopsies – painful and invasive – may become a thing of the past. And treatments can be targeted to specific cells, not the shotgun approach that most chemotherapy still relies on.

What else might nanorobotics be used for? For one, healing wounds by stopping bleeding and regenerating tissue. Someday, perhaps tackling dementia. Nanotechnology will increasingly be used in surgery. “Nanodevices that can traverse the blood-brain barrier, [will bypass] the need for clumsy electrodes or invasive brain surgery.” Then there’s improved ability to control bionic limbs.

Futurist Ray Kurzweil believes that nanotechnology will someday outsmart death and let people live forever. (I don’t know about that. Lots to think about the ethical implications of deciding who gets to be immortal, and what resources they’ll consume at the expense of new generations.)

None of this is happening immediately. As Peterson points out:

…nature is a highly evolved system developed over billions of years, making the synthesis of unnatural nanoscale devices painfully slow and difficult.

Still, it’s worth keeping an eye on emerging nanotech as it takes on so many seemingly insurmountable health problems. And one thing we know: it won’t be clunky at all!

Tesla Tech News Is Always Interesting

I’m always interested in automotive-related tech, so I’m always interested in reading about what Tesla is up to.

Much of the news of late has been about the Tesla Cybertruck, which was announced a while back to great fanfare. More recently, they’ve announced a release date – it goes into full production in 2022 – and have been taking orders. So far, they’ve received 1.25 million pre-orders for an electric truck that sort of reminds me of a Batmobile.

I don’t have a lot to say about all the tech in there, but this caught my eye:

…the optional Integrated Tonneau Cover is going to have a twist – it will also be a solar panel that provides electricity when deployed, or rather a series of 110 separate solar cells. According to a patent filed by Tesla with the US Patent Office:

When the tonneau cover is deployed to cover the bed and the solar electric cells that make up the slats are facing the sun, the battery within the electric vehicle can be charged by solar electric cells.

When the truck bed cover is in a closed position, it is configured to recharge the battery pack, the closed position of the truck bed cover enabling the plurality of solar electric cells to face a sun. (Source: InsideEVs)

From what I know about solar-powered vehicles, they have to sit in the sun for an awful long time to translate into any appreciable mileage. Which is not to say that it won’t be viable at some point.
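A quick back-of-envelope calculation shows why. Every number below is an assumption for illustration – usable cover area, cell efficiency, sun hours, and truck consumption are my guesses, not Tesla specs.

```python
# Rough estimate of the range a solar tonneau cover might add per day.
# All figures are illustrative assumptions, not Tesla specifications.
cover_area_m2 = 3.0        # assumed usable bed-cover area
panel_efficiency = 0.20    # typical commodity solar-cell efficiency
peak_sun_hours = 5.0       # a sunny day's worth of full-sun-equivalent hours
ev_wh_per_mile = 450       # assumed consumption for a heavy electric truck

# ~1000 W/m^2 is the standard full-sun irradiance figure
daily_wh = cover_area_m2 * 1000 * panel_efficiency * peak_sun_hours
miles = daily_wh / ev_wh_per_mile
print(f"{daily_wh:.0f} Wh/day, about {miles:.1f} miles of range")
```

A handful of miles per sunny day: nice to have, but hardly a substitute for plugging in.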

What I found more intriguing was that Tesla was recently granted a patent for its ‘Improper Seatbelt Usage Detection’ (ISUD). There are already, of course, buckle-your-seat-belt warning systems in cars and trucks. But it’s easy enough to fake that you’re using your seatbelt, and the ISUD will solve that problem:

The ISUD system will rely on sensors built into the seats and the belt itself, making the system considerably harder to fool. When it detects improper seatbelt use:

A warning may be a text message displayed on display system of vehicle infotainment system, or an alarm sounding on vehicle infotainment system, a text message to registered mobile number of occupant, etc. (Source: InsideEVs; the second paragraph is from the patent filing.)

The system may also perform other actions to help make sure people are using seatbelts correctly.

And then there’s this bit of news:

Tesla is dropping the forward radar sensor from the Model 3 and Model Y, beginning the rollout of what the company calls Tesla Vision, which will make Tesla’s driver-assist systems rely almost completely on the car’s cameras. CEO Elon Musk has been hyping “pure vision” for months now and seems confident that it will be able to eventually get Tesla to Level 5 autonomy. (Source: Car and Driver)

At present, Tesla’s Autopilot system deploys a combination of cameras, ultrasonic sensors, and forward radar:

The new cars will rely mostly on the car’s cameras and its computer’s processing ability to use Autopilot and the suite of features currently included in the Full Self-Driving (FSD) package. Other automakers use radar for their adaptive cruise control systems, and they benefit by being able to operate in inclement weather and direct sunlight.

When the new “vision-only” system is used, some existing features will no longer be available. One is Smart Summon, which lets an owner get their car out of a tight parking space without having to get in the car. Another is Autosteer, which is used to keep your car in its lane. That feature will be limited to speeds of 75 mph, down from the prior 90 mph. (Not that anyone needs to be driving 90 mph…) These features will be restored over time.

In case you’ve forgotten, that Level 5 autonomy that “pure vision” is enabling is ‘no driver required at all.’ It really will be coming at some point.

We’ve come a long way since Smokey the Bear

It’s been hot here this summer. It’s been rainy. And the forest fires out West have certainly been part of our weather, too.

There are days when we can see it over Syracuse: smoke that has wafted from the forest fires in the West and turned the skies in Upstate New York hazy. Some days, you can smell the smoke. Sometimes, the late afternoon/early evening sun, like the old song says, is “shining like a red rubber ball.” These red suns are pretty spectacular, but that flaming color is caused by smoke particles.

As a kid growing up, I heard plenty from Smokey the Bear, warning us that “Only you can prevent forest fires.” (In fact, that “only you” is not quite correct. While it’s true that many fires are started by carelessly tossed matches or poorly put out campfires, there are many climate and other natural forces, like lightning strikes, at play, as well.)

Smokey the Bear wasn’t just about fire prevention. According to the lyrics to the Smokey the Bear song, Smokey was always prowlin’ and growlin’ and sniffing the air. And because of all this sniffing, “He can spot a fire before it starts to flame.”

In real life, I suspect that there have been few if any forest fires detected by cartoon bears. Historically, most forest fires were detected by rangers in lookout towers, and in our national forests these towers are still staffed. But technology is being increasingly deployed for fire-spotting. Some of the systems being used are satellite-based, or are optical solutions that search for smoke. Emerging systems are sensor and IoT based.

EE Times had a recent article that showcased a new “early warning” IoT solution. Dryad Networks, one of a number of companies with sensor-based forest fire detection systems, is bringing out a “solar-powered LoRaWAN-based sensor system for early detection.” Rather than having each base station connected to the Internet, a mesh network infrastructure lets the stations communicate with each other, one station relaying a message on to the next until it reaches an Internet-connected gateway. The gateways, in turn, reach the Internet through LTE-M or through “Ethernet connectivity to add a Starlink satellite dish.”

The sensors are batteryless and have no user-serviceable parts…The sensors integrate a Bosch BME688 gas sensor chip that detects the gas composition of the air — it looks at hydrogen, carbon dioxide, carbon monoxide. Then the Dryad sensor uses machine learning and edge processing to detect the combination of gases typical for a wildfire.

The goal here is to detect a wildfire in its early, still-smoldering phase – in under an hour.
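To give a flavor of gas-signature detection, here’s a minimal sketch that flags a likely smoldering fire from hydrogen, CO, and CO2 readings. The thresholds and scoring are invented for illustration – Dryad’s real system uses a trained machine-learning model, not hand-set cutoffs.

```python
# Illustrative gas-signature check: smoldering wood emits hydrogen and
# carbon monoxide well before open flame, alongside a CO2 rise.
# All thresholds below are made up for the sketch.
def smoldering_fire_likely(h2_ppm: float, co_ppm: float, co2_ppm: float,
                           baseline_co2_ppm: float = 420.0) -> bool:
    co2_rise = co2_ppm - baseline_co2_ppm
    score = 0
    score += 1 if h2_ppm > 1.0 else 0        # elevated hydrogen
    score += 1 if co_ppm > 5.0 else 0        # elevated carbon monoxide
    score += 1 if co2_rise > 100.0 else 0    # CO2 well above ambient
    return score >= 2   # require agreement from at least two gases

print(smoldering_fire_likely(h2_ppm=2.5, co_ppm=12.0, co2_ppm=600.0))  # True
```

Requiring multiple gases to agree is a cheap way to suppress false alarms from, say, a passing diesel truck; the ML model presumably learns a much subtler version of the same idea.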

Other solutions incorporate robotics and machine learning.

Wildfire detection systems are still in their early stages, and it’s not clear how well they’ll stand up to flames. Still, it’s clear that technology can and will play a valuable role that will become more and more important as severe wildfires become more widespread and routine. Wildfire detection is definitely going high tech – well beyond anything that could have been imagined by this kid when he was singing along to Smokey the Bear all these years ago. Still, we shouldn’t ignore Smokey’s advice. If we’re out in the woods, we need to be careful. We may not be the only ones who can prevent forest fires, but we still need to do our part.

Rock-a-Bye Baby, brought to you by the Internet of Things

As most everyone who’s experienced life with a newborn can attest, there will be times when you’re sleep deprived. There’s no getting around some of this: newborns need to eat every few hours, and it’s not like they can raid the fridge and see what’s there to snack on. While all babies have to eat, some babies, for any number of reasons, are up every hour or so, and require some parental attention before they can get soothed back into the Land of Nod. It’s been a long while – my three kids are all grown – but I can still remember how excited my wife and I were when each of our babies almost, sorta, kinda slept through the night.

And now there’s the Snoo, a $1,500 AI-robotic bassinet that helps babies (and their folks) get some much-needed sleep. The Snoo wasn’t around when I was a new dad, but – although I’m just reading about it now (no grandkids yet!) – it’s been available for a few years now.

It starts off with the first principle that a baby should be sleeping on its back, not face-down or on its side – practices in earlier generations that, in fact, led to many SIDS deaths. A Snoo baby is swaddled – another practice that’s recommended for newborns – in a little Snoo sack that has “wings” with clips that attach it to the sides of the bassinet. So: no rollovers.

Then there’s the technology.

The Snoo is equipped with sensors that detect when your baby is fussing. It responds by rocking the cradle and playing some white noise. If the baby continues to fuss, the levels of motion and white noise scale up. If the Snoo decides that its approach isn’t working, it sends an alert to the parents’ smartphone, so they can come and check on the baby, see if it’s hungry, or needs a diaper change, or just a real live human parental being to rub its tummy for a bit. The Snoo has protections built in that keep the motor from rocking the bassinet too aggressively. And there are software controls on volume maximums that limit the white noise volume. If you’re worried about hackers deciding to rock your baby from afar, Wi-Fi can be disabled.
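The behavior described above amounts to an escalation loop, which can be sketched in a few lines. Levels and wording here are invented – this is an illustration of the logic, not Happiest Baby’s firmware.

```python
# Sketch of the Snoo-style escalation loop: respond to fussing with
# increasing motion/white-noise levels, then alert the parents.
MAX_LEVEL = 4   # assumed cap, enforced by the safety limits on motor/volume

def respond_to_fussing(still_fussing_at_level) -> str:
    """still_fussing_at_level(level) -> True if the baby keeps fussing
    after rocking/white noise at that intensity."""
    for level in range(1, MAX_LEVEL + 1):
        if not still_fussing_at_level(level):
            return f"soothed at level {level}"
    # Nothing worked: a human parent is needed (hungry, wet diaper, etc.)
    return "alert parents via smartphone"

# A baby who settles once motion reaches level 3:
print(respond_to_fussing(lambda level: level < 3))   # soothed at level 3
```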

Reading about the Snoo reminded me that, a few years ago, smart diapers were introduced. These (needless to say expensive) diapers were equipped with an embedded moisture sensor that sent out an alert when it detected dampness. I don’t know whether smart diapers took off at all. It sure seemed to me like an application looking for a reason for being. Certainly in my experience, babies have a pretty good way of signaling that they’re uncomfortably wet: they cry.

A baby stays in the Snoo until they’re six months old, the age when babies typically graduate to a crib. Snoo babies don’t go cold turkey. There’s a transition period when the babies learn to rely on their own self-soothing, rather than on the Snoo.

Anyway, while I may think that smart diapers were frivolous, I have a different take on the Snoo.

Sure, it’s expensive: $1,500 for a bassinet that’s going to be used for six months is a lot of money. Even if you hang on to it for later kids, it’s still a hefty original outlay, especially when you consider how many other items new parents need to acquire. (There are rental options for Snoos, as well as a secondary market on Craigslist.) But for those who can afford it, something that not only helps keep babies safe, but also gives parents a bit more sleep when they so desperately need it, seems like a good thing. Especially if you have a fussy baby – is the word “colicky” still used? – it sounds like a godsend.

That’s my initial opinion, anyway. I guess we’ll really find out what I think about it when we start having grandkids!

 

——————————————————————-

Sources for this post: Washington Post and Wired.

Chip Design via AI

Google is always up to something interesting, and one of the interesting things they’ve been up to is using Artificial Intelligence (AI) to automate the chip design process. The first place they’re deploying their new model is to design their next-gen tensor processing units (TPUs). These are the processors, used in Google’s data centers, that are tasked with increasing the performance of AI apps. So, AI deployed to help accelerate AI.

The [Google] researchers used a dataset of 10,000 chip layouts to feed a machine-learning model, which was then trained with reinforcement learning. It emerged that in only six hours, the model could generate a design that optimizes the placement of different components on the chip, to create a final layout that satisfies operational requirements such as processing speed and power efficiency. (Source: ZDNet)

Six hours, eh? That’s fast!

The specific task that Google’s algorithms tackled is known as “floorplanning.” This usually requires human designers who work with the aid of computer tools to find the optimal layout on a silicon die for a chip’s sub-systems. These components include things like CPUs, GPUs, and memory cores, which are connected together using tens of kilometers of minuscule wiring. Deciding where to place each component on a die affects the eventual speed and efficiency of the chip. And, given both the scale of chip manufacture and computational cycles, nanometer-changes in placement can end up having huge effects. (Source: The Verge)

This is the sort of work that could take engineers months to accomplish when done manually.

Optimizing chip layouts is a complex, intricate process. Processors contain millions of logic gates (standard cells) and thousands of macro (memory) blocks. The “floorplanning” process that decides where to put the standard cells and macro blocks is critical, impacting how rapidly signals can travel across the chip. Placing the macro blocks comes first, and the number of possible arrangements is staggering: Google researchers state that “there are a potential ten to the power of 2,500 different configurations to put to the test.” And given that Moore’s Law still seems to be with us – you remember Moore’s Law: the number of transistors on a chip doubles roughly every two years – there are ever more combinations to worry about.
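To get a feel for why exhaustive search is hopeless, here’s a back-of-the-envelope calculation (my own illustration, not from the Google paper – the grid and block counts are made up): even assigning a modest number of macro blocks to cells of a coarse placement grid yields an astronomical number of arrangements.

```python
from math import comb, factorial, log10

def placement_count(rows, cols, blocks):
    """Ways to assign `blocks` distinct macro blocks to distinct cells
    of a rows x cols grid (ignoring block shapes and overlap rules)."""
    sites = rows * cols
    # choose which sites are used, then order the blocks on them
    return comb(sites, blocks) * factorial(blocks)

# Toy example: 50 macro blocks on a 100 x 100 placement grid.
n = placement_count(100, 100, 50)
print(f"roughly 10^{int(log10(n))} possible placements")
```

Even this crude count lands near 10^200 – and real floorplanning adds block shapes, rotations, and wiring constraints on top, which is how the search space balloons toward the 10^2,500 figure Google cites.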

Obviously, no one’s putting trillions of configurations to the test. Engineers rely on experience and expertise to create their floorplans. But AI can evaluate many different options, and no doubt come up with ones that even the best engineers might have missed.

Once the macro blocks are in place, the standard cells and wiring are added. Then there’s the inevitable revising and adjusting iterations. Using AI in this process is going to free up engineers to focus on more custom work, rather than spending their time on component placement.
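To make the objective concrete, here’s a toy sketch of placement optimization. This is not Google’s reinforcement-learning method – it’s a simple random-improvement search over a made-up grid, minimizing half-perimeter wirelength, which is a standard stand-in cost for “how far signals have to travel.” Block and net names are hypothetical.

```python
import random

def wirelength(placement, nets):
    """Half-perimeter wirelength: for each net (a set of blocks),
    the bounding box of its blocks' (x, y) grid positions."""
    total = 0
    for net in nets:
        xs = [placement[b][0] for b in net]
        ys = [placement[b][1] for b in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def random_search(blocks, nets, grid=10, iters=2000, seed=0):
    """Toy optimizer: propose random moves, keep only improvements.
    (Blocks may share a cell -- fine for an illustration.)"""
    rng = random.Random(seed)
    placement = {b: (rng.randrange(grid), rng.randrange(grid)) for b in blocks}
    best = wirelength(placement, nets)
    for _ in range(iters):
        b = rng.choice(blocks)
        old = placement[b]
        placement[b] = (rng.randrange(grid), rng.randrange(grid))
        cost = wirelength(placement, nets)
        if cost < best:
            best = cost
        else:
            placement[b] = old  # revert moves that don't help
    return placement, best

blocks = ["cpu", "gpu", "mem0", "mem1"]
nets = [{"cpu", "mem0"}, {"cpu", "gpu"}, {"gpu", "mem1"}]
layout, cost = random_search(blocks, nets)
print(cost)
```

Where this toy search blindly samples moves, Google’s model learns from thousands of prior layouts which moves are promising – that learned judgment is what compresses months of work into hours.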

The acceleration of chip design is not going to immediately solve the chip shortage crisis, which is at the fab rather than the design level. Still, over time, if next gen chips can be designed faster, it should have positive impacts throughout the supply chain.

One of the most fascinating revelations was that the floorplan created by AI (that’s “b”, to the right) looks more random and scattershot than the very neat and orderly layout (“a” on the left) created by a human engineer. (This illustration is from Nature, which published a paper on the Google AI work.)

Inevitably, when we see AI being deployed, we ask ourselves whether AI, robots, machine learning will replace us humans.

Personally, I’m pretty sure that human engineers are still good for a while. There will always be work for engineers in a world that increasingly relies on technology in just about every aspect of our lives. There’s no denying that AI is going to take on some of the tasks that traditionally have been in human hands, but who knows what new opportunities this will create for knowledgeable and highly-skilled engineers like the ones we have here at Critical Link. Oh, brave new world!

“The science behind the boom”

The past year has been a tough one. Most of us very likely know people who’ve had covid; many have lost family members, friends, colleagues. Most of us have very likely seen small businesses – shops and restaurants – close permanently. Many know folks who lost their jobs.

We’ve all been missing out on a lot of things: getting together with family and friends; celebrations great and small; going to church. And we’ve all been missing the community gatherings that we may not even have realized how much we enjoyed – until they were gone. One of those is Fourth of July fireworks, which many cities and towns had to cancel last year.

Fortunately, in many places, fireworks are back.

Engineer that I am, this makes me think of the technology that goes into those fireworks displays that light up the July sky. I posted about celebrating the Fourth a few years back, but I thought it was a topic worth revisiting. So – thanks to an article by Kevin Clemens I found on DesignNews – here goes with Kevin’s list of “interesting fireworks facts to help you understand the science behind the boom.”

In the beginning, there and here

The Chinese get credit for inventing fireworks, sometime around the second century B.C. Those first firecrackers were bamboo stalks. To get a bang out of them, you tossed them in a fire. Once the hollow air pockets heated up: BANG!

But it took another 1,000 years or so before “modern” fireworks came about. That’s when the Chinese concocted gunpowder by combining potassium nitrate, sulfur, and charcoal. And by stuffing the gunpowder into the bamboo stalks, we had fireworks.

By the Middle Ages, fireworks had made their way to Europe, and the Italians – I am proud to say – were the first Europeans to produce fireworks.

Although July 4th wasn’t made a federal holiday until 1941 – something I did not know – we’ve been celebrating it since 1776, when our Continental Congress decided to declare our independence.

Now, on to the science and technology

Launch: There’s nothing new about some of the science and technology. The firework shell is set off from a mortar, which is nothing more than a short steel pipe. A black powder charge in the pipe – the lifting charge – is set off and thrusts the shell up and out. The lifting charge also lights the shell fuse. When the shell reaches its target altitude, a bursting charge blows the shell apart. (This is how it’s generally done. In some cases, although it’s more expensive, a rocket is used to launch the shell.)

Colors: The colors are produced by combustible metal salts: the most commonly used are “strontium carbonate (red), calcium chloride (orange), sodium nitrate (yellow), barium chloride (green) and copper chloride (blue).” The most common metal salt – sodium chloride, i.e., table salt – won’t work because it “absorbs water and would produce a fizzle rather than a sizzle.” And while red is a very popular color for fireworks, reds – when color is combined with pyrotechnic compounds – produce carcinogenic “junk” that drifts back down to earth. (Researchers have found a non-carcinogenic, “environmentally-friendly” approach. Just to be on the safe side, you might want to make sure your blanket and lawn chairs are upwind from the junk!)

Patterns: Remember running around the backyard in your pj’s writing your name with a sparkler? Those stars you see when a shell bursts may well be pieces of sparkler compound. Those starry night fireworks are the most common ones. More sophisticated designs are created by having different colored sections that ignite at different times, using separate fuses. Patterns are also created using star pellets. Explosive charges are deployed to make the outlines grow larger so you can see what they’re supposed to be.

The physics behind it:

Calculating the trajectory for a fireworks projectile launched from a mortar tube is a straightforward two-dimensional kinematics problem. In this case the initial launch velocity is positive, the acceleration due to gravity is negative, and the maximum altitude depends only upon the vertical component of the initial velocity (neglecting air resistance on the shell). The horizontal velocity (moving downrange) is constant (again neglecting air resistance), and the position depends upon the time of flight.
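Those few sentences translate directly into a few lines of code. This sketch applies the standard drag-free kinematics equations; the muzzle velocity and launch angle below are numbers I picked for illustration, not figures from the article.

```python
import math

G = 9.81  # m/s^2, acceleration due to gravity (acting downward)

def shell_trajectory(v0, angle_deg):
    """Drag-free 2D kinematics for a mortar-launched shell.
    Returns (time to apex, max altitude, downrange drift at apex)."""
    theta = math.radians(angle_deg)
    vx = v0 * math.cos(theta)   # horizontal velocity: constant
    vy = v0 * math.sin(theta)   # initial vertical velocity
    t_apex = vy / G             # vertical velocity reaches zero
    h_max = vy**2 / (2 * G)     # from v^2 = v0^2 - 2*g*h at the apex
    x_apex = vx * t_apex        # downrange position at the apex
    return t_apex, h_max, x_apex

# Hypothetical shell: ~90 m/s launch speed, tilted 5 degrees off vertical.
t, h, x = shell_trajectory(90.0, 85.0)
print(f"apex after {t:.1f} s, at {h:.0f} m altitude, {x:.0f} m downrange")
```

Note that only the vertical component sets the altitude and the time to apex – tilting the mortar trades height for downrange drift, which is exactly why shells are launched near-vertically.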

Show choreography: Back in the day, fireworks shows were choreographed by hand. These days, there are design programs and simulation software that let designers see what their displays will look like. And when the show goes on:

Prior to the invention of the “e-match,” pyrotechnics were fired by hand using manual switch panels. But now, once the show has been fully programmed, script files can be exported to the firing system to make sure that each launch and explosion is timed to perfection. Better and safer shows are the result.
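Firing-system script formats are proprietary, but the idea is simple: a timestamped cue list that the controller steps through. Here’s a hypothetical sketch (the cue fields, effect names, and checks are all my invention) of the kind of sanity-checking a design program might do before exporting a script.

```python
from dataclasses import dataclass

@dataclass
class Cue:
    fire_time: float  # seconds from show start
    tube: int         # mortar / e-match channel number
    effect: str

def validate_script(cues, min_gap=0.05):
    """Toy pre-export check: order cues by time, enforce a minimum
    spacing between firings, and refuse to fire any tube twice."""
    cues = sorted(cues, key=lambda c: c.fire_time)
    for prev, cur in zip(cues, cues[1:]):
        if cur.fire_time - prev.fire_time < min_gap:
            raise ValueError(f"cues too close together at t={cur.fire_time}")
    seen = set()
    for c in cues:
        if c.tube in seen:
            raise ValueError(f"tube {c.tube} fired twice")
        seen.add(c.tube)
    return cues

show = validate_script([
    Cue(12.0, 3, "red peony"),
    Cue(10.5, 1, "gold brocade"),
    Cue(11.2, 2, "blue chrysanthemum"),
])
print([c.effect for c in show])
```

A real firing system adds music synchronization and continuity checks on each e-match circuit, but the core safety win is the same: the machine enforces the timing rules a hand on a switch panel could only approximate.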

Environmental and safety concerns: Red fireworks aren’t the only environmental (and health hazard) problem. To avoid fires being set when residual matter drifts to the ground, many fireworks displays occur near water. That means the perchlorates produced can wash into the water, “contaminating rivers, lakes and drinking water.” Air quality can also be temporarily impacted by fireworks displays. Cleaner fireworks are being introduced.

In terms of your safety as you sit there watching the fireworks, thanks to the professionals running the fireworks show, audience injuries are rare-to-nonexistent. The fireworks people buy (legally or illegally) and set off in their backyards are a different story. Thousands of consumer injuries are recorded each year. So be careful out there.

If you’re a fireworks fan, here’s hoping that things are back to normal where you live.

Whether you’ll be celebrating the Fourth of July with the local fireworks display, and/or firing up the grill in the backyard, and/or just breathing a sigh of relief that the worst of the pandemic seems to be behind us, I hope that you all enjoy a Glorious Fourth.