
Security in the IoT Era

In this, our third post on the EETimes series on trends around Embedded in the IoT Era, I’ll be summarizing some of the articles devoted to security. (The other categories they consider most critical for designers are connectivity and lower-power design, and edge intelligence.)

In his article, What’s Driving the Shift from Software to Hardware in IoT Security, Majeed Ahmad notes that, with the tremendous growth of IoT devices and applications, there’s a parallel growth in interest in security. And much of that growth is coming on the hardware side, with technologies that include “secure elements, hardware security modules (HSMs), and physically unclonable function (PUF) capabilities.” Why use the hardware approach rather than rely on software to provide security?

Ahmad quotes Michela Menting of ABI Research, who says:

“Hardware-based security offers better protection from manipulation and interference than its software-based counterpart because it’s more difficult to alter or attack the physical device or data entry points.”

And the ability to withstand attacks takes on more and more importance as more and more IoT devices and applications are literally matters of life and death (someone taking control of an embedded medical device, or of a car) or have the potential to cause widespread, serious economic and physical damage if attacked (the electricity grid).

Hardware security encompasses numerous features and components: true random number generation (TRNG), secure boot, secure update, secure debug, cryptographic acceleration, and isolation of sensitive and critical functions within security subsystems. Then there are tamper resistance and protection of secrets, tamper detection, on-the-fly memory encryption, process/function isolation, and run-time integrity protection.
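To give a sense of what secure boot is doing conceptually, here’s a minimal Python sketch of the verify-before-execute idea: the boot code computes a digest of the firmware image and compares it, in constant time, against a reference value held in protected storage. Everything here (file name, digest, the notion of “protected storage”) is a hypothetical stand-in of mine; real secure boot verifies an asymmetric signature against a public key anchored in ROM or fuses, long before any OS or filesystem exists.

```python
import hashlib
import hmac

def sha256_of_image(path: str) -> bytes:
    """Hash the firmware image the way boot code would hash flash contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(4096), b""):
            h.update(chunk)
    return h.digest()

def verify_firmware(image_path: str, trusted_digest: bytes) -> bool:
    """Return True only if the image matches the reference digest.
    hmac.compare_digest gives a constant-time comparison."""
    return hmac.compare_digest(sha256_of_image(image_path), trusted_digest)

# Hypothetical usage: refuse to boot (or fall back to a recovery image) on mismatch.
# trusted = bytes.fromhex("<digest provisioned at manufacturing time>")
# if not verify_firmware("app_firmware.bin", trusted):
#     raise RuntimeError("Firmware failed verification; refusing to boot")
```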

Increasingly, security IP subsystems are being integrated into SoCs. But this approach isn’t viable for all types of applications, and even when it is, there is such a wide range of SoCs (and MCUs) out there, all with different configurations, that integrated security (vs. deployment of discrete security elements) won’t happen overnight.

None of this is to say that software doesn’t matter when it comes to security. For the foreseeable future, it will play a complementary role, working hand in hand with hardware security measures.

Another article in the series addresses an area that’s directly relevant to consumers who are increasingly embracing smart technology in their homes: security systems, thermostats, slow cookers, baby monitors…

In her piece, Sally Ward-Foxton discusses hardware root of trust (RoT), and asks the question “can we trust AI in safety critical systems?”

The answer is apparently yes, but not 100%.

That’s because AI is a black-box solution.

While neural networks are designed for specific applications, the training process produces millions or billions of parameters without us having much understanding of exactly which parameter means what.

For safety critical applications, are we comfortable with that level of not knowing how they work?

CoreAVI specializes in systems – defense, aerospace, automotive, industrial – that are higher-end than what we have in our homes. CoreAVI’s Neil Stroud suggests that there are ways to render AI inference “deterministic enough” for those systems and, by extension, for the applications found in smart homes. The burden is in part on those creating the AI algorithms/models.

Another technique is to test run a particular AI inference many times to find the worst-case execution time, then allow that much time when the inference is run in deployment. This helps AI become a repeatable, predictable component of a safety critical system. If an inference runs longer than the worst-case time, this would be handled by system-level mitigations, such as watchdogs, that catch long-running parts of the program and take the necessary action—just like for any non-AI powered parts of the program, Stroud said.
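Here’s a minimal Python sketch of the measure-then-enforce idea Stroud describes: profile an inference routine over many runs to estimate a worst-case execution time, then treat any deployed run that blows past that budget as a fault for the system-level mitigations to handle. The run_inference callable and the margin factor are stand-ins I’ve made up for illustration; a real safety-critical system would rely on a hardware watchdog that preempts the overrunning task, plus far more rigorous WCET analysis, rather than an after-the-fact check like this.

```python
import time

def profile_wcet(run_inference, sample_input, trials=1000, margin=1.2):
    """Estimate a worst-case execution time (seconds) from repeated timed runs,
    padded by a safety margin. This is an empirical estimate, not a guarantee."""
    worst = 0.0
    for _ in range(trials):
        start = time.perf_counter()
        run_inference(sample_input)
        worst = max(worst, time.perf_counter() - start)
    return worst * margin

def run_with_deadline(run_inference, x, deadline_s, on_overrun):
    """Run one inference; hand control to a watchdog-style handler on overrun."""
    start = time.perf_counter()
    result = run_inference(x)
    if time.perf_counter() - start > deadline_s:
        # e.g., discard the result and fall back to a deterministic classical path
        return on_overrun()
    return result
```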

Still, the black box nature of AI makes certifying “safety critical” systems somewhat problematic.

There’s also an article in the series, “Security Proliferation Vexes IoT Supply Chain” by Barbara Jorgensen, that’s worth a read.

Connectivity and Low Power in the IoT Era

We are well into the Internet of Things era, and have reached the point where we (almost) take it for granted that so many objects that are part of everyday life – even objects we don’t traditionally consider to be all that technology-based – include embedded systems that connect them to the world. EETimes has been running articles that address some of the challenges designers are grappling with when they’re creating IoT applications.

Given that connectivity is the most fundamental attribute of the IoT, it’s not surprising that one of the key challenge arenas lies here.

In her article, How to Select Wireless SoCs for Your IoT Designs, Gina Roos notes that choosing the right SoC isn’t that simple. It’s not merely a matter of picking one that supports the right wireless protocols that can meet range, latency and throughput requirements. There are other factors that designers need to think about, tradeoffs that need to be made: battery life, compute and memory resources, footprint.

First off, the application dictates what wireless protocol to use. Said Dhiraj Sogani of Silicon Labs:

“Every wireless protocol is playing a different role, and the end-application use cases are the most important in deciding one or more of these protocols for an IoT device.” For wireless protocols, he said, requirements include application throughput, latency, number of network nodes, and range. “IoT devices are becoming more complicated every day as more functionality is getting integrated into the devices. Adding wireless to the IoT devices increases the complexity manifold. There are many wireless protocols being used in IoT devices, including Wi-Fi, BT, BLE, Zigbee, Thread, Z-Wave and cellular. The choice of wireless communication protocols for a particular device depends upon the application, size, cost, power and several other factors.”

A wearable needs a different wireless protocol than a drone; a home security monitor has different requirements than a payment processing system. Form, in the form of wireless protocol, will definitely follow function.

There’s also the challenge of whether to use an integrated wireless SoC vs. separating out the wireless from the processor. Using an integrated SoC means a smaller footprint, and a smaller product. But it also means less flexibility when it comes to optimizing compute performance. It all depends. And, again, it comes down to the application. More complex applications may benefit from a discrete solution.

Other challenges that surround the choice of a wireless solution include RF circuitry, hardware longevity, and the software tools and environment (which will, of course, differ, depending on the OS selected). Lots to keep in mind!

And, of course, one of the things to keep in mind is power consumption. In his article, Energy Harvesting Circuit Enables Ultra-Low Power Apps, Maurizio Di Paolo Emilio takes on the topic from the lower end of the power consumption spectrum.

From the 10,000 foot perspective, he notes that the quest for “maximum efficiency” is common throughout the electronics sector:

Energy harvesting techniques can not only minimize—if not eliminate—maintenance interventions; they can also reduce current consumption and minimize power losses, enabling more efficient use of energy resources and the fulfillment of requirements imposed by recent global regulations.

But for ultra-low power apps, energy efficiency is closely tied to device feasibility, and much of this applies to the tens of billions of IoT devices in use.

In many cases, IoT devices are powered by batteries that users must recharge or replace periodically.

By capturing and converting different forms of energy available in the environment, such as solar or wind power, it is possible to obtain the electrical energy required to recharge IoT device batteries. Other forms of usable energy are radio frequency (RF) signals, mechanical or kinetic energy, solar power, and thermal energy.

After briefly reviewing the most common energy harvesting sources, Emilio gets into energy harvesting circuit design and the need for “solutions that can make the most of the available energy, minimize losses and maximize efficiency.”

A typical energy harvesting scheme is composed of an ambient energy source, energy transducer, power management unit, energy storage element, voltage regulator and electrical load.
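As a back-of-the-envelope illustration of why the power management piece matters, here’s a small Python sketch of the kind of energy budget a designer might run for a duty-cycled sensor node fed by a harvester. All the numbers (harvested power, sleep and active currents, conversion efficiency) are made-up placeholders rather than figures from Emilio’s article; the point is just the bookkeeping: average load power has to stay below average harvested power after conversion losses.

```python
def average_load_power(i_active_a, t_active_s, i_sleep_a, period_s, v_supply=3.3):
    """Average power (watts) of a duty-cycled node that wakes briefly each period."""
    duty = t_active_s / period_s
    i_avg = i_active_a * duty + i_sleep_a * (1.0 - duty)
    return v_supply * i_avg

# Hypothetical numbers: a node that wakes for 50 ms once a minute.
p_load = average_load_power(i_active_a=8e-3, t_active_s=0.05,
                            i_sleep_a=2e-6, period_s=60.0)

p_harvested = 100e-6   # assume the transducer delivers 100 µW on average
efficiency = 0.80      # assumed power-management/conversion efficiency
p_available = p_harvested * efficiency

print(f"load {p_load * 1e6:.1f} µW vs. available {p_available * 1e6:.1f} µW")
print("energy-neutral" if p_available >= p_load else "needs a bigger harvester or more storage")
```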

He then describes in detail (complete with block diagrams) some scenarios: harvesting RF and thermal energy, and solar radiation harvesting.

Overall, the EETimes series on issues regarding embedded design in the IoT era has been a pretty engrossing read. In my next couple of posts, I’ll be summarizing their articles on edge processing and device security. So stay tuned!

Embedded in the IoT Era

Since the start of the year, EETimes has been running a series of articles on the challenges that designers must consider when they’re creating embedded systems, which these days are ubiquitous. As they point out, “embedded systems underpin practically every device and system today.” They place those challenges in three categories they consider most critical: connectivity and lower-power design; edge processing; and device security. With my next few posts, I’ll be summarizing what EETimes has to say about each of these areas. With this post, I’ll tee up the series.

In his piece, Majeed Ahmad focuses on the issues that designers are dealing with as they combine embedded processors with analog circuitry – a combination that is enabling a broad range of applications.

Embedded applications are, of course, nothing new, but, as Majeed notes, new challenges are always popping up— “especially when embedded processors designed to work with analog have also evolved over the years to facilitate greater battery efficiency and lower power consumption.”

Drawing on the work of Synaptics’ Dave Gillespie, he describes two analog-embedded challenges.

The first one presented: how to deal with the fact that silicon technology for digital and analog is diverging, as manufacturers aiming for higher performance and lower supply voltages run up against the constraint that “analog front-end (AFE) circuits often use higher I/O voltages for better dynamic range.”

“Ever smaller feature sizes make digital logic faster and cheaper, but at the same time, they are less and less appropriate for analog circuits,” Gillespie said. “Consequently, sometimes a package that looks like a chip from the outside actually has multiple silicon dice inside.”

There is a solution in which the package combines one part that’s produced through a digital-friendly process – making use of capabilities like digital calibration, which can make up for any aspects of analog processing that are less than ideal – and another circuit that’s built through a process that’s more analog-oriented.

The question then becomes: what goes into the system-on-chip (e.g., signal level, bandwidth) and what more appropriately stays external?

It’s tricky. Mixed-signal and advanced analog designs are built on process technologies that have been optimized, from the ground up, for digital performance, and so are better suited to some applications than others. The industry is coming up with workarounds for this. (One thing about technology: it does have the ability to respond and adapt as needed!)

Beyond the challenge of accommodating analog, there’s the question of deciding where operations are best performed: special-purpose processors vs. analog circuits.

“You need a lot of analog expertise to call this decision properly,” [Gillespie] said. “To do an efficient SoC today, you need deep skill with a wide variety of processing cores, not just with a CPU and its bus architecture.”

And then there’s our old friend DSP to factor in:

“Moreover, at high frequencies, it’s also often necessary to pre-process signals with a dedicated DSP or video codec core right next to the data conversion stage.”

There are a number of other considerations, some specific to the sensor-rich applications that are increasingly on the scene.

Lots going on in the world of embedded processing, much of it thanks to the spectacular growth of IoT devices. There’s no exact measure for the number of IoT devices that are out there. I’ve seen estimates for 2023 of 7 billion IoT devices, and others that place the figure at nearly double that amount. (Just extrapolating from a data point of one – the IoT devices I own – I’d put my money on the higher number.) Growth rate estimates for the next few years also vary. But the commonality among them is that growth is high, and not slowing down anytime soon.

Looking forward to going a little deeper on connectivity and low power, edge computing, and security.

Three Strikes You’re Out

With the major league baseball season now underway, I was interested to read that all of baseball’s Triple-A teams – that’s the highest-level league before the majors – will be using an electronic strike zone. (It was used in some games last season, and has been in operation in a number of lower-level leagues since 2019. The pandemic has somewhat delayed implementation.)

The Automatic Balls and Strikes system, commonly referred to as ABS, will be deployed in two different ways. Half of the Class AAA games will be played with all of the calls determined by an electronic strike zone, and the other half will be played with an ABS challenge system similar to that used in professional tennis.

Each team will be allowed three challenges per game, with teams retaining challenges in cases when they are proved correct. [Major League Baseball’s] intention is to use the data and feedback from both systems, over the full slate of games, to inform future choices. (Source: ESPN)

ABS is composed of these four subsystems:

  • The MLB tracking system
  • An interface the ballpark operator uses to set the correct batter
  • An MLB server that receives the tracking data and has the ball-strike evaluation code
  • A low-latency communication system to relay calls to the umpire

And here’s how it works:

The primary inputs for ABS are the pitch arc — the path the ball takes as it proceeds from the pitcher to the catcher — and the location and dimensions of the strike zone. The pitch data generated by the MLB tracking system is composed of a set of polynomials describing the path the center of the ball travels through space in MLB’s standard coordinate system, where the y-axis points toward the pitcher’s mound from the back of home plate, the z-axis points directly up from the back of home plate, and the x-axis is orthogonal to the other two axes.

As soon as the pitch data has been generated by the tracking system, ABS uses that information and the current strike zone definition to determine whether the pitch is a strike. The strike zone definition has varied across different iterations of the ABS system, but as a general rule, ABS constructs a two-dimensional shape on the x/z plane at a specific depth and checks whether the ball intersects that shape. The ball is modeled as a circle centered at a point along the pitch arc with a radius of 1.45 inches as defined in Rule 3.01 of the Major League Rulebook. The result of the ball/strike evaluation is then relayed to the umpire. (Source: MLB Technology Blog)
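To make the geometry concrete, here’s a rough Python sketch of the kind of check described above: evaluate the pitch-arc polynomials at the depth where the zone is defined, then test whether a 1.45-inch-radius circle centered at that point overlaps a rectangle on the x/z plane. The polynomial coefficients, the zone dimensions, and the choice of a simple rectangle are placeholders of mine; MLB’s actual zone construction and coordinate conventions are more involved than this.

```python
import numpy as np

BALL_RADIUS_FT = 1.45 / 12.0  # Rule 3.01 ball radius, converted from inches to feet

def position_at_depth(x_poly, y_poly, z_poly, y_plane_ft):
    """Given polynomials x(t), y(t), z(t) for the ball's center (feet, MLB axes),
    find the (x, z) position where the pitch crosses the plane y = y_plane_ft."""
    shifted = np.array(y_poly, dtype=float)
    shifted[-1] -= y_plane_ft                 # roots of y(t) - y_plane = 0
    roots = np.roots(shifted)
    t = min(r.real for r in roots if abs(r.imag) < 1e-9 and r.real >= 0.0)
    return np.polyval(x_poly, t), np.polyval(z_poly, t)

def is_strike(x, z, half_width_ft, bottom_ft, top_ft):
    """True if a ball-radius circle at (x, z) intersects the rectangular zone
    [-half_width, +half_width] x [bottom, top]."""
    nearest_x = np.clip(x, -half_width_ft, half_width_ft)
    nearest_z = np.clip(z, bottom_ft, top_ft)
    return (x - nearest_x) ** 2 + (z - nearest_z) ** 2 <= BALL_RADIUS_FT ** 2

# Hypothetical pitch, np.polyval-style coefficients (highest power first), units in feet.
x_poly = [0.01, -0.05, 0.2]      # slight horizontal movement
y_poly = [-16.1, -130.0, 55.0]   # released ~55 ft out, traveling toward the plate
z_poly = [-16.1, -2.0, 6.0]      # gravity acting on a ~6 ft release height
x, z = position_at_depth(x_poly, y_poly, z_poly, y_plane_ft=1.417)
print("strike" if is_strike(x, z, half_width_ft=0.708, bottom_ft=1.5, top_ft=3.5) else "ball")
```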

The results so far have proved to be very accurate and timely.

By the way, the umpire will not be standing around waiting for the result to get there. (If you follow baseball at all, you’ll be aware that speeding up the game is a big issue, so MLB will be doing nothing to slow things down.) From the strike zone graphics used in most MLB broadcasts, they’ve learned that those graphics update faster than the umpire makes the call. This provides a window of opportunity for ABS to inform the umpire of the call, letting them know whether to use their right hand (strike) or left hand (ball).

It’s also important to note that the umpire does retain some autonomy, and will use their own judgment on matters that aren’t limited to the strike zone: checked swings, catcher interference. Also, if there’s too much latency, the umpire will go ahead and call the pitch. (Again: no one wants to delay the game!)

There’s a lot more detail in the MLB Technology Blog, if you’re interested.

Meanwhile, in reading about automation coming to baseball umpiring, I came across an article about an attempt at automated umpiring that happened in March 1950 at the spring training facility for the Dodgers – at that time, the Brooklyn Dodgers.

The so-called “cross-eyed electronic umpire” introduced that day used mirrors, lenses and photoelectric cells beneath home plate that would, after detecting a strike through three slots around the plate, emit electric impulses that illuminated what The Brooklyn Eagle called a “saucy red eye” in a nearby cabinet.

Popular Science declared, “Here’s an umpire even a Dodger can’t talk back to.”…

“It was definitely a novel use of existing technology, using a photoelectric cell to size the strike zone,” he [Mike Jakob, the president of Sportvision] said. “But,” he added, “it couldn’t be used for night games.” (Source: New York Times)

Branch Rickey, president and part-owner of the Dodgers, was a visionary, but he wasn’t envisioning automated umpiring. He was planning on using the machine as a teaching tool to help pitchers learn how to perfect their bunting technique. (That 1950 machine, incidentally, was created by a General Electric team located in Syracuse. Team leader Richard Shea “specialized in semiconductors, did the initial circuit design work for the company’s first transistors and also worked at Knolls Atomic Power Laboratory.” Even without the invention of the automated umpire, that’s some resume!)

Anyway, if Major League Baseball in 2023 is not quite ready to automate baseball umpiring, they sure weren’t in 1950!

Meanwhile, Syracuse has a AAA club – the Mets – so I may actually see ABS in person this summer. It’ll be interesting to see how it changes the dynamics of the game. Play ball!

Superconductivity

Superconductivity has long been the Holy Grail of materials science. Superconductive materials have the ability to convey electricity with next to no resistance. This decreases energy loss, making energy conveyance a lot more efficient – which has implications for pretty much any technology that uses electric energy, from personal devices like smartphones, to medical applications, to transportation, to the power grid.


Unfortunately, superconductivity has so far only occurred at ultracold temperatures (or under extreme pressure), which means it has had limited application when it comes to everyday usage. You’re usually not using your smartphone, undergoing an MRI, taking the train, or operating the power grid when it’s hundreds of degrees below zero. In other words, cold that’s a lot colder than the normal, run-of-the-mill frigid winter weather we regularly experience in the Syracuse, NY environs.

Then, a few weeks ago, came an announcement of a superconductor developed at the University of Rochester that could deliver on the superconductivity promise.

The new superconductor consists of lutetium, a rare earth metal, and hydrogen with a little bit of nitrogen mixed in. It needs to be compressed to a pressure of 145,000 pounds per square inch before it gains its superconducting prowess. That is about 10 times the pressure that is exerted at the bottom of the ocean’s deepest trenches.

The team at Rochester started with a small, thin foil of lutetium, a silvery white metal that is among the rarest of rare earth elements, and pressed it between two interlocking diamonds. A gas of 99 percent hydrogen and 1 percent nitrogen was then pumped into the tiny chamber and squeezed to high pressures. The sample was heated overnight at 150 degrees Fahrenheit, and after 24 hours, the pressure was released.

About one-third of the time, the process produced the desired result: a small vibrant blue crystal. (Source: NY Times)

Of course, your smartphone doesn’t operate at “the bottom of the ocean’s deepest trenches.” But this result represents a radical improvement over the results found earlier by the same Rochester team, which is led by Professor Ranga Dias. Those results, reported in 2020, were achieved at pressure “akin to the crushing forces found several thousand miles deep within the Earth.”

And as for the latest results only being produced one-third of the time, the power grid can’t rely on a process that only works that infrequently. Nor can your smartphone, for that matter.

Still, the Rochester team appears to be on a promising track for further research, suggesting that a superconductor that “works at ambient room temperatures and at the usual atmospheric pressure of 14.7 pounds per square inch” may well be within reach.

That’s the good news.

The not-so-good news is a cloud of controversy hovering over this latest finding, a controversy stemming from the 2020 report by the same team, which has since been retracted. Scientists are weighing in on this issue, from those who discount the controversy to those skeptics who cast full doubt on the latest findings. Most of those responding seem to be in the “this is a great result, if…” category.

“If this is real, it’s a really important breakthrough,” said Paul C.W. Chu, a professor of physics at the University of Houston who also was not involved with the research.

However, the “if” part of that sentiment swirls around Dr. Dias, who has been dogged by doubts and criticism, and even accusations by a few scientists that he has fabricated some of his data. The results of the 2020 Nature paper have yet to be reproduced by other research groups, and critics say that Dr. Dias has been slow to let others examine his data or perform independent analyses of his superconductors.

The decision to retract the 2020 paper was made by Nature editors, not by Dias and his co-authors, who objected to the retraction. That earlier paper has since been revised and is currently under review.

In any case, other researchers may be able to reproduce the experiment, which is feasible because “the new lutetium-based material is superconducting at much lower pressures,” making it easier to replicate. And Professor Dias says that he would like to offer information on making the compound, and to share samples. However, there are some intellectual property issues standing in the way. Dias “has founded a company, Unearthly Materials, that plans to turn the research into profits.”

Researchers involved in this discovery understandably want to be compensated for their work, but that will rest on whether this latest attempt at superconductivity turns out to be the real deal. Hopefully, it is, as this could be a real game-changer when it comes to conveying electricity.

———————————————————————————————————–

Other articles on this topic you might be interested in: “Room Temperature Superconductivity Claimed,” published by Spectrum IEEE, and “Controversy Surrounds Blockbuster Superconductivity Claim” which appeared in Scientific American.

AI isn’t quite human-like. Yet. But it’s getting closer

There’s been plenty of buzz in the last month or so about AI chatbots. And there’s no doubt about it, AI is making chatbots more human-like. Still, it looks like we still have a bit of time before the human race is completely replaced by machines.

Out of the gate in early February was Google with Bard, its AI chatbot. Unfortunately, in its first demo, Bard gave an answer that was quickly shown to be wrong.

To err is human, I suppose. So in making a factual mistake, Bard might have been passing the Turing Test of sorts. (The Turing Test evaluates whether a machine can give responses that are indistinguishable from those a human would provide.)

The answer Bard flubbed was a claim that the James Webb Space Telescope (JWST) “took the very first pictures of a planet outside of our own solar system.” In fact, those first pictures had been taken nearly two decades before the JWST launched.

…a major problem for AI chatbots like ChatGPT and Bard is their tendency to confidently state incorrect information as fact. The systems frequently “hallucinate” — that is, make up information — because they are essentially autocomplete systems.

Rather than querying a database of proven facts to answer questions, they are trained on huge corpora of text and analyze patterns to determine which word follows the next in any given sentence. In other words, they are probabilistic, not deterministic. (Source: The Verge)
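A toy example helps make the “probabilistic autocomplete” point. The Python sketch below builds a bigram model from a few made-up sentences and then samples continuations word by word: the output is driven entirely by which words tended to follow which in the training text, with nothing anywhere checking whether the result is true. Real LLMs use transformer networks over subword tokens and vastly more data, but the generation loop is the same in spirit.

```python
import random
from collections import Counter, defaultdict

corpus = (
    "the telescope took pictures of a planet . "
    "the telescope took pictures of distant galaxies . "
    "the probe took samples of a comet ."
).split()

# Count which word follows which (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, n_words=8):
    """Sample each next word in proportion to how often it followed the previous one."""
    out = [start]
    for _ in range(n_words):
        options = follows.get(out[-1])
        if not options:
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the"))  # fluent-sounding, but nothing here is fact-checked
```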

And if you think this little factual error doesn’t matter, Google’s stock price dropped 8% the following day.

Perhaps hoping to upstage their friends at Google, the next day Microsoft began introducing a new version of Bing, their search engine. Bing is a very small player in search; the most recent numbers I saw gave Bing a market share of about 3% vs. Google’s 93%. I’m sure they’re hoping that a Bing that’s really smart will close that gap. The new Bing incorporates a customized chat experience built on OpenAI’s large language model technology, the same family that powers ChatGPT. The new Bing promises to provide complex responses to questions – replete with footnotes – as well as to assist creative types with their poetry, stories, and songs. It’s been made available to a limited number of previewers, and there’s a long waitlist of those hoping to get a go at it.

Unfortunately, the new Bing went a bit rogue.

…people who tried it out this past week found that the tool, built on the popular ChatGPT system, could quickly veer into some strange territory. It showed signs of defensiveness over its name with a Washington Post reporter and told a New York Times columnist that it wanted to break up his marriage. It also claimed an Associated Press reporter was “being compared to Hitler because you are one of the most evil and worst people in history.”

Microsoft officials earlier this week blamed the behavior on “very long chat sessions” that tended to “confuse” the AI system. By trying to reflect the tone of its questioners, the chatbot sometimes responded in “a style we didn’t intend,” they noted. Those glitches prompted the company to announce late Friday that it started limiting Bing chats to five questions and replies per session with a total of 50 in a day. At the end of each session, the person must click a “broom” icon to refocus the AI system and get a “fresh start.” (Source: Washington Post)

Again, I guess you could say that getting confused and lashing out are actually very human traits. Still, if the expectation is that AI chatbots will be factual, relevant, and polite, it appears that they aren’t yet ready for primetime.

Not to be outdone, in late February, Meta released LLaMA, an AI language generator.

LLaMA isn’t like ChatGPT or Bing; it’s not a system that anyone can talk to. Rather, it’s a research tool that Meta says it’s sharing in the hope of “democratizing access in this important, fast-changing field.” In other words: to help experts tease out the problems of AI language models, from bias and toxicity to their tendency to simply make up information. (Source: The Verge)

Of course, Meta had its own AI chatbot fiasco in November with Galactica. Unlike Bing and Bard, which are general purpose, Galactica’s large language model was supposedly expertly built for science.

A fundamental problem with Galactica is that it is not able to distinguish truth from falsehood, a basic requirement for a language model designed to generate scientific text. People found that it made up fake papers (sometimes attributing them to real authors), and generated wiki articles about the history of bears in space as readily as ones about protein complexes and the speed of light. It’s easy to spot fiction when it involves space bears, but harder with a subject users may not know much about.  (Source: Technology Review)

It’s one thing to insult a newspaper reporter; quite another to make up scientific papers.

Looks like us humans are safe for a while. For now.

Technology Breakthroughs, 2023

I recently picked up the current issue (Jan-Feb) of the MIT Technology Review, which focused on ten breakthrough technologies that they’re keeping an eye on for 2023. Overall, an interesting list. Here are the technologies that caught my attention.

Organs on demand: There were several medical-related technologies, but the one I found most intriguing was the development of engineered organs on demand. And there’s a lot of demand out there: the US organ transplant waiting list is estimated to hold some 100,000 names, and in 2022, nearly 7,000 people died while waiting for a transplant. Pig-to-human organ transplants have been the subject of research for decades, and they’re now getting closer to becoming a reality.

Several biotech companies are focusing on editing the DNA of pigs, removing molecules that aren’t as compatible with humans as they are with pigs, and tweaking the pigs by adding genes that make the animals more human-like. So far, there’s only been one transplanted pig organ: a 2022 heart transplant in which the patient survived two months. (The man was willing to try this approach because, for a number of reasons, he was ineligible for a human heart transplant.) Formal trials are expected to get underway in 2024.

More exciting to me are the engineered solutions.

Researchers are in the early stages of exploring how to engineer complex tissues from the ground up. Some are 3D-printing scaffolds in the shape of lungs. Others are cultivating blob-like “organoids” from stem cells to imitate specific organs. In the long term, researchers hope to grow custom organs in factories.

This is a way off, but it will make more organs available to help take care of all those on long waiting lists.

Given the business I’m in, I was naturally drawn to the headline A chip design that changes everything. What’s going to change everything is the adoption of the RISC-V open standard, which the Tech Review believes will topple the “power dynamics” under which chip makers producing off-the-shelf chips have had to license the designs they use from a couple of sources (Intel and ARM). The RISC-V design is an open standard, which eliminates any licensing charge.

RISC-V specifies design norms for a computer chip’s instruction set. The instruction set describes the basic operations that a chip can do to change the values its transistors represent—for example, how to add two numbers. RISC-V’s simplest design has just 47 instructions. But RISC-V also offers other design norms for companies seeking chips with more complex capabilities.
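For a flavor of what an instruction set actually specifies, here’s a small Python sketch that packs one RV32I instruction – add rd, rs1, rs2 – into its 32-bit R-type encoding (funct7, rs2, rs1, funct3, rd, opcode). It’s a hand-rolled illustration of the published base encoding, not a tool from the RISC-V project.

```python
def encode_rtype(opcode, rd, funct3, rs1, rs2, funct7):
    """Pack an R-type RV32I instruction: funct7 | rs2 | rs1 | funct3 | rd | opcode."""
    return ((funct7 & 0x7F) << 25) | ((rs2 & 0x1F) << 20) | ((rs1 & 0x1F) << 15) \
         | ((funct3 & 0x7) << 12) | ((rd & 0x1F) << 7) | (opcode & 0x7F)

# add x3, x1, x2  ->  x3 = x1 + x2
ADD_OPCODE, ADD_FUNCT3, ADD_FUNCT7 = 0b0110011, 0b000, 0b0000000
word = encode_rtype(ADD_OPCODE, rd=3, funct3=ADD_FUNCT3, rs1=1, rs2=2, funct7=ADD_FUNCT7)
print(hex(word))  # 0x2081b3
```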

Remember when the signs outside McDonald’s used to advertise how many billions of hamburgers the chain had sold? Well, RISC-V chips are gaining traction, and “10 billion cores [have] already shipped.” They’re being used in a range of applications: “earbuds, hard drives, and AI processors.” In the not too distant future, they’ll be showing up in “data centers and spacecraft.” Stay tuned!

The James Webb Space Telescope (JWST) was launched in late 2021. JWST is 100x more powerful than the Hubble Telescope, with a main mirror that measures 21 feet across (roughly triple the width of Hubble’s), giving it far greater resolution capability.

Every day, JWST can collect more than 50 gigabytes of data, compared with just one or two gigabytes for Hubble. The data, which contains images and spectroscopic signatures (essentially light broken apart into its elements), is fed through an algorithm…[which] turns the telescope’s raw images and numbers into useful information…

It is specifically designed to detect infrared radiation, allowing it to cut through dust and look back in time to a period when the universe’s first stars and galaxies formed.

JWST is up there, orbiting 1.5 million kilometers from Earth. Scientists are using the data it’s sending back to figure out just what happened after the Big Bang. JWST’s mission sounds a lot like that of Star Trek’s Starship Enterprise: to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no man has gone before. May JWST “live long and prosper.”

Supertalls!

When the (still) tallest building in my hometown of Syracuse went up in the late 1920s, it was considered a skyscraper. All 21 floors of it. A couple of years later, when the Empire State Building was erected Downstate, the definition of a skyscraper was stretched a lot further. But most buildings in the 20th century weren’t anywhere near as tall as the Empire State, and, although many were taller than Syracuse’s, what constituted a skyscraper settled in at around 40 or 50 stories.

That was then, and this is now. And the skyscrapers going up are supertalls that dwarf the Empire State Building.

After 9/11, and the destruction of the Twin Towers, the consensus among engineers and architects was that the Age of the Skyscraper had ended.

It hadn’t.

In fact, in the post-9/11 decades, buildings have gotten even taller.

There are skyscrapers, and then there are supertalls, often defined as buildings more than 300 meters in height, but better known as the cloud-puncturing sci-fi towers that look like digital renderings, even when you’re staring at them from the sidewalk. First supertalls were impossible, then a rarity. Now they’re all over the place. In 2019 alone, developers added more supertalls than had existed prior to the year 2000; there are now a couple hundred worldwide, including Dubai’s 163-story Burj Khalifa (a hypodermic needle aimed at space), Tianjin’s 97-floor CTF Finance Centre (reminiscent of a drill bit boring the clouds), and, encroaching on my sky, Manhattan’s 84-floor Steinway Tower…

Some supertalls have an even more futuristic designation: superslim. These buildings are alternately described as “needle towers” or “toothpick skyscrapers” (though not every superslim is a supertall). Early superslims shot up in Hong Kong in the 1970s, though lately they’ve become synonymous with New York City; four supertall superslims loom over the southern end of Central Park in a stretch of Midtown dubbed “Billionaires’ Row.” (Source: The Atlantic)

Supertalls are feats of engineering.

A 2021 article in the journal Civil Engineering and Architecture declared: “There is no doubt that super-tall, slender buildings are the most technologically advanced constructions in the world.”

There are any number of potential problems the supertalls have, including how they contend with winds. Even if the structure isn’t going to get blown over, occupants have experienced “water sloshing in toilet bowls, chandeliers swaying, and panes of glass fluttering… Occupants of tall buildings have, in high winds, reported nausea, distractibility, difficulty working, and fatigue, though researchers report that skyscrapers ‘rarely, if ever, induce vomiting.’”

They can make a lot of noise, too.

Sounds like bad, bad, bad vibrations. (Above a certain height, all buildings sway a bit. Just not so you’d notice.)

Supertalls – and the even taller ones in the works (including one in Jeddah, Saudi Arabia, that will be one kilometer high) – have been made possible by:

…new combinations of materials like microsilica and fly ash (a residue that results from burning coal) have made concrete steroidally strong…and steel has gotten sturdier too, all of which has helped spur the supertall boom. Advances in elevator technology—such as ultra-strong, lightweight cables and algorithms that efficiently consolidate passengers—have also helped buildings stretch.

RWDI is a Canadian engineering consultancy with a specialty practice in modeling and analysis around building performance. They’ve worked on many of the supertalls out there. For one in NYC – 432 Park Avenue – they:

3-D-printed a knee-high model of the building, and stuck it into a miniature Midtown Manhattan, complete with dozens of neighboring high-rises that can affect the windscape at 432 Park’s site. They put the model buildings on a turntable inside a wind tunnel, then subjected them to smoke and powerful fans. RWDI adjusted the wind tunnel’s settings to mimic Manhattan’s gusts and rotated the tiny neighborhood in 10-degree increments to get a baseline measurement of how the proposed supertall would sway, absorb winds careening off other structures, and shift the wind around it—all of which remains too complex to accurately predict with algorithms.

When the prototype was tested, it was shown to sway at a level just below where people in the building could lose their balance if they were standing there (inside, mind you) when the wind started blowing. Not much of a selling feature.

The design for this supertall was tweaked – including the addition of two “blow-through” floors: empty space with no residents, no windows – but before the builders went ahead and built it, they went to the Marine Institute of Newfoundland to get a feel for what they hoped would be their final design.

They used the Marine Institute’s simulator, which is normally deployed for ships’ crews to get accustomed to dealing with waves, icebergs and the like, to experience what “tenants will feel on a windy day, during a strong gale, or during a once-a-century hurricane. At 432 Park, the blow-through floors alone wouldn’t settle the building, so the developers ultimately installed two tuned mass dampers—a pair of 600-ton counterweights between the 86th and 89th floors that can move 11 feet, to offset the supertall’s sway.”
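To see why a tuned mass damper helps, here’s a small Python sketch of the underlying idea: a lightly damped “building” mode driven at resonance, simulated with and without a smaller mass tuned to roughly the same frequency hanging off it. The masses, stiffnesses, and forcing are arbitrary normalized numbers I chose for illustration, not 432 Park’s actual parameters; the takeaway is simply that the auxiliary mass soaks up energy and knocks the resonant sway way down.

```python
import math

def peak_sway(with_tmd, t_end=600.0, dt=0.005):
    """Simulate a resonantly forced single building mode, optionally with a tuned
    mass damper attached, using semi-implicit Euler; return the peak displacement
    observed after the start-up transient."""
    m1, k1, c1 = 1.0, 1.0, 0.01            # building modal mass, stiffness, light damping
    mu = 0.02                               # damper mass ratio (2% of building mass)
    m2 = mu * m1
    k2 = m2 * (1.0 / (1.0 + mu)) ** 2       # classic tuning: damper frequency near the building's
    c2 = 2.0 * 0.08 * math.sqrt(k2 * m2)    # modest damping in the damper itself
    w = math.sqrt(k1 / m1)                  # drive right at the building's natural frequency

    x1 = v1 = x2 = v2 = 0.0
    peak, t = 0.0, 0.0
    while t < t_end:
        f = 0.001 * math.sin(w * t)         # wind-like harmonic forcing
        a1 = (-k1 * x1 - c1 * v1 + f) / m1
        if with_tmd:
            coupling = k2 * (x2 - x1) + c2 * (v2 - v1)
            a1 += coupling / m1
            a2 = -coupling / m2
            v2 += a2 * dt
            x2 += v2 * dt
        v1 += a1 * dt
        x1 += v1 * dt
        t += dt
        if t > t_end / 2:                   # ignore the start-up transient
            peak = max(peak, abs(x1))
    return peak

print(f"peak sway without damper: {peak_sway(False):.4f}")
print(f"peak sway with damper:    {peak_sway(True):.4f}")
```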

At least, the designers are hoping the dampers will solve the swaying problem. Each supertall is a one-off – “essentially a prototype” – so you can’t do the same sorts of testing done for things that will be produced at scale.

It all sounds pretty perilous to me, so I’m glad there are engineering experts putting their minds to making it all work. Still, while it may not be much, I’ll take Syracuse’s tallest building any old day.

What’s Caught My Eye When Reading About CES 2023

Although I don’t attend CES, the mega consumer tech show held each January in Las Vegas, I’m always curious to read about the products/gadgets that are on display there.

Year after year, it seems to be more of the same, with many products/gadgets that are next gen versions of what has been around for years. (Not that everything that gets shown at CES is actually around. Lots of items are previewed there, but aren’t commercially available until years later.)

So, what’s the story on the latest and greatest at CES 2023?

Bigger and better TVs. Cars that do more than they did last year. Improved laptops. Gaming enhancements – VR, AR. Foldable, rollable screens. Smarter everything, especially, for some reason, smarter (and wildly expensive) toilets.

I looked through a number of different articles on Wired, The Verge, CNET, etc. and it was all beginning to blur.

But, amid the blur, a few things piqued my interest.

The Nowatch stood out because, as its name implies, it’s a smart watch/fitness tracker that doesn’t tell time. (I guess they figure that, if you need to know the time that badly, you can look at your phone. Or just ask someone.) Anyway, the Nowatch has all the usual suspects: sensors that measure heart rate, steps, and sleep. And, while it’s at it, your sweat glands, too. Measuring sweat is a relative newcomer to the health metrics family, but it’s helpful when it comes to monitoring stress levels – and providing an early warning system with suggestions for de-stressing.

If I were working in a warehouse or factory, or on a construction site, I’d be looking for more than a smart non-watch telling me when to destress. I’d be more than interested in the German Bionic Apogee, an exosuit that “supports a worker’s lower back and provides a boost of power as they lift heavy objects, unload trucks, or perform repetitive tasks on a production line.” (Source: Wired) Sounds good to me.

The Shiftall Mutalk looks like something Hannibal Lecter might have worn. The Mutalk is a Bluetooth microphone that’s used during a VR session, muffling your voice to those around you, enabling you to participate even if you’re in close quarters with a lot of other people and want to keep the peace. Presumably you can also use it during a Zoom meeting if you’re in an open office setting and don’t want to disturb your colleagues, or if you’re working from home and don’t want to disturb your spouse and/or your pets.

I’m not quite there yet, but I’m admittedly not getting any younger, so I may someday be in the market for the Eargo 7, an over-the-counter hearing aid. The Eargo is “self-fitting” – you don’t need a doctor or audiologist to help set it up – it’s almost invisible, and the batteries are self-charging. I’m not going to go so far as to call it something to look forward to, but if and when, the Eargo’s on my list to check out.

In December, I wrote that all I wanted for Christmas was a flying bicycle, but if someone gave me an Aska A5 flying car, I wouldn’t say no. (It would have to be a gift, as the price tag is roughly $800K.) FAA approval is pending and expected shortly, and “Aska hopes to use the A5 to start a ride-sharing service in 2026.” (Source: CNET)

Closer to reality, there’s the eKinekt BD 3 bike desk, which makes the treadmill desk look so yesterday. The bike desk doesn’t just let you get some exercise in while working; your pedaling actually converts kinetic energy into power for your computer. I might want the computer I was powering to be an Asus “glasses-free” 3D laptop that pops an image right off the screen and into your face. The 3D experience is made possible through the use of eye-tracking tech.

Finally, if I can’t have a flying bicycle or a flying car, I’m intrigued by Rollkers, “a gadget you strap to your feet to double your walking speed.” (Source: Wired) Rollkers motorize your feet, and using them is very much like walking on an airport motorized walkway. You’re walking, but the Rollkers speed things up. This concept has been kicking around at CES since 2015, but the company is hoping that they’ll get to market within the next couple of years.

So that’s what caught my eye when reading about CES 2023.

Tech Trends for 2023

One of my favorite things to do when the new year is starting is to roam around looking at what the tech prognosticators are prognosticating for the coming year. There’s no end to the prediction articles that are out there, but I managed to narrow things down. Mostly, I settled on reading those that approached technology trends from the business perspective, i.e., what business leaders need to be on the lookout for. I find this perspective plenty interesting, as this is the arena where what we all work on has its impact (and, at the same time, it’s ultimately business that impacts the direction that we all go in).

The “business perspectives on tech trends” articles I saw pretty much all landed in the same place when they looked ahead at 2023, starting with AI on through the metaverse, robotics, and sustainability. The one that I found best expressed the general sense of what the next year holds was Bernard Marr’s set of predictions in Forbes.

This isn’t the first time that someone’s predicted ubiquitous AI; I bet if I went back a few years, I’d find it on plenty of prediction lists. In Marr’s view, 2023 is the year during which AI “will become real,” with businesses deploying it in more and more of their products and services. He focused specifically on retail, where AI is used for everything from making recommendations to consumers to managing complex inventory systems.

Marr also sees the metaverse (“a more immersive internet”) becoming more real, with AR and VR advancements, including avatar technology, that aren’t just for gamers, but will be used in platforms that businesses will use for richer meeting, training, onboarding, and collaboration environments. (Note to self: pick a great avatar.)

Forget the wilder things happening out there in crypto world. Blockchain/Web3 technology has the potential to make data more secure. Marr also sees NFTs “become more usable and practical,” e.g., an NFT concert ticket providing virtual backstage access.

With digital twin technology and 3D printing, the digital and physical worlds will be bridged. “Digital twins are virtual simulations of real-world processes, operations, or products,” a lab for safely and cost-effectively testing new product ideas. Once tested, products can be produced in the real world via 3D printing.

Nature will be increasingly editable. New materials, created using nanotechnology, will have features like “water resistance and self-healing capabilities.” Gene editing (CRISPR-Cas9) will accelerate, enabling scientists “to correct DNA mutations, solve the problem of food allergies, increase the health of crops, or even edit human characteristics like eye and hair color.” I don’t know how useful that latter use case is, but if they can edit hair onto a bald head, well, now we’re talking.

Just as we’ve been talking AI for years, so, too with quantum computing. In Marr’s view, the race is on “to develop quantum computing at scale,” bringing with it the potential for “computers capable of operating a trillion times more quickly than the fastest traditional processors available today.” A trillion times? Pretty unfathomable. The downside? The encryption methods in use today will be useless.

When it comes to green technology, look for progress to be made. Green hydrogen. A green pipeline from North Sea wind plants. Decentralization of the power grids. Marr sees these all happening, helping decrease carbon emissions.

Add more human-like robots to the list of technologies (AI, quantum computing) that have been on trend lists for quite a while. We’re getting used to the notion of robots in warehouses and factories. How about as “event greeters, bartenders, concierges, and companions for older adults”? O, brave new world!

And let’s not forget about autonomous systems. Maybe the self-driving cars haven’t happened as rapidly as some had predicted, but factories and warehouses are becoming increasingly autonomous. (AI + robots = autonomous technology, and the robots don’t have to look anywhere near as human as a bartender.)

Sustainable technology is also on Marr’s drawing board for 2023. Most of us rely on technology throughout our personal and professional lives, generally without thinking about “where rare earth components for things like computer chips originate and how we’re consuming them,” or worrying about the energy-hogging data centers used by cloud services (Netflix, Spotify). Marr believes that in 2023, consumers will become more conscious of the need for more sustainable technology, and begin pushing businesses to do more about it.

I didn’t want to completely ignore the tech perspective on tech trends, so I’ll include a couple of observations from Brian Dipert’s 2023 forecasts from EDN.

On the semiconductor supply front, Dipert notes that while things are greatly improved over where they stood last year at this time, he foresees inconsistency. When it comes to high volume, commodity semiconductors, he sees oversupply.

Specifically, I’m talking about semiconductor memory, both volatile (DRAM) and nonvolatile (NAND flash memory). Prices are crashing, along with profits, translating into customer delight and supplier angst.

On the other hand, when it comes to semiconductors that are less commodified and produced at lower volumes, Dipert believes that any full restoration of supply stability will be more gradual, and with hiccups like last year’s “automobile manufacturing lines shut down due to IC non-availability.”

A couple of things are likely to improve the supply situation: the faltering crypto market means less compute-intensive bitcoin mining, easing demand for chips. And the CHIPS and Science Act, while it won’t have an impact in 2023, will in the longer run help with domestic supply.

Like those on the business side expressing concerns over sustainability, Dipert sees that environmental issues will come more to the fore in 2023. He notes that chip manufacture is highly demanding when it comes to consumption of water and electricity. Progress toward achieving more use of green energy sources is likely to remain slow, but awareness of the issues should eventually translate into more action.

Personally, I don’t make predictions, other than to predict that 2023 will bring with it plenty of interesting and technically challenging work for the engineers at Critical Link.