
The Future of Embedded Processing Technology

My last two posts have been about some high-level tech trends that Accenture had identified. In this post, I take a look at some trends that are closer to home, with my summary of (and take on) a piece by TI’s Sameer Wasson “3 trends impacting the future of embedded processing technology.”

While there’s plenty of overlap between the Accenture trends and what Sameer sees – for one non-surprising example, there’s AI – his focus is more specifically on the role of embedded tech.

Embedded technology has been transformative, changing the way we all live (at home, at work, at play) and changing the way the world around us operates. And there’s more to come. To realize the promise of technology, Sameer sees that embedded technology will need to evolve. It will need to become more energy-efficient, making electronic devices more environmentally sustainable. It will need to do a better job of integrating technology with the human element. It will need to keep up with an exponentially increasing amount of data – not just capturing that data, but analyzing and acting on it. And, overall, embedded technology has to promote greater accessibility, ease of use, and affordability, making all the wonders that technology brings with it more universally available.

As Sameer sees things – based on his observations of the industry and conversations with TI customers – “three trends stand out as crucial capabilities that a comprehensive embedded processing portfolio should deliver.”

The first trend he identifies is more integrated sensing capabilities. These capabilities will capture more (and better) data that can be used in smart cities, in application areas like building energy systems that both save money and reduce waste, and emergency response systems that promote public safety. More integrated sensing will also come to the fore in smart homes, improving energy efficiency and security and enabling myriad personal uses (entertainment, day-to-day chores, etc.). As Sameer observes, “the value of embedded intelligence working with real-time data is only starting to be understood and realized.”

As promised, another trend is enabling artificial intelligence in every embedded system. TI finds that “adding intelligence to every system is becoming a norm.” Automation – make that smart automation – is becoming ubiquitous, which means that engineers are adding capabilities “that must process ever-increasing amounts of data, make intelligent decisions and respond quickly.”

Intelligence at the edge will be at the heart of smart, automated systems. This means that data is processed close to the sensor where it’s captured, making processing faster, more secure, and more reliable.

Intelligence at the edge means eliminating the latency involved when data is sent to the cloud to be analyzed.

In this section, Sameer points to TI’s AM6XA vision processor family, which is also the device at the heart of Critical Link’s latest SOM family, the MitySOM-AM62A System on Module. The MitySOM-AM62A offers customers all the features of TI’s AM62A processor, with the added benefit of faster time to market, long-term product support, and reduced supply chain headaches.

The final trend that Sameer includes is ease of use so designers can get to market faster. Since Critical Link was founded, this has been our mantra.

Embedded processing portfolios, and the ecosystems that support them, must be easy to design with and backed by support robust enough to help customers with all their embedded needs, reducing time to market and allowing designers to spend more time innovating.

Sameer concludes by writing “I’m excited about the future of embedded processing technology…and seeing how our customers will unlock new ways of creating a better world.”

So am I!

Taking a look at Accenture’s Technology Vision 2023 – Part 2 of 2

In my last post, I took a look at the first two trends (digital identity, big/bigger data) discussed in Accenture’s Technology Vision 2023 report. This time around, I’ll see what else they had to say about the technology trends that they see being the most impactful to the convergence of the physical and digital worlds and, hence, to the enterprises Accenture works with.

Not surprisingly, AI is on their list, in this case as the trend toward Generalizing AI.

AI has been around for a good long while, but it was the 2020 release of OpenAI’s GPT-3, the largest language model to date, that really began to turn heads. What GPT-3 did was show off breakthrough capabilities, “teaching itself to perform tasks it had never been trained on, and outperforming models that were trained on those tasks.” All of a sudden, a model didn’t have to be created to perform a specific task within its data modality (e.g., text, images). We’re now heading into the territory of multimodal models, “which are trained on multiple types of data (like text, image, video, or sound) and to identify the relationships between them,” and which have hundreds of millions, even trillions, of parameters. Game changer! We’re still not replacing humans quite yet, but Accenture cites one “generalist agent” that can perform, and seamlessly switch between, more than 600 tasks, including chatting, captioning images, playing video games, and operating a robotic arm.

Generalizing AI is made possible thanks to a couple of important innovations. Transformer models:

…are neural networks that identify and track relationships in sequential data (like the words in a sentence), to learn how they depend on and influence each other. They are typically trained via self-supervised learning, which for a large language model could mean pouring through billions of blocks of text, hiding words from itself, guessing what they are based on surrounding context, and repeating until it can predict those words with high accuracy. This technique works well for other types of sequential data too: some multimodal text-to-image generators work by predicting clusters of pixels based on their surroundings.
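To make that “hide a word, guess it from context” training loop concrete, here’s a deliberately tiny Python sketch of my own. To be clear, it is not a transformer (no neural network, no attention); it just learns bigram counts in the same self-supervised way: mask each word, predict it from its neighbor, and check the guess.

```python
from collections import Counter, defaultdict

# A toy "corpus"; a real model would train on billions of blocks of text.
corpus = (
    "embedded processing brings intelligence to the edge "
    "intelligence at the edge reduces latency and improves security "
    "transformer models learn from sequential data like the words in a sentence"
).split()

# "Training": count which words tend to follow each word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_masked(tokens, i):
    """Guess the hidden token at position i from the word just before it."""
    if i == 0 or not follows[tokens[i - 1]]:
        return None
    return follows[tokens[i - 1]].most_common(1)[0][0]

# Self-supervised check: hide each word in turn and try to recover it.
# (Toy demo only: it trains and tests on the same tiny corpus, purely to
# illustrate the mask-and-predict loop described in the quote above.)
hits = sum(predict_masked(corpus, i) == corpus[i] for i in range(1, len(corpus)))
print(f"recovered {hits} of {len(corpus) - 1} masked words")
```

A real large language model runs the same mask-and-predict loop, but with billions of parameters and far richer context than a single preceding word.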

Scale is the second innovation here. Increased computing power enables transformer models to vastly increase the number of parameters they can incorporate. (Trillions, anyone?) This both yields greater accuracy and enables the models to learn new tasks.

The Accenture report culminates in an exploration of what they term “the big bang of computing and science,” a feedback loop between technology and science where breakthroughs in one domain spur breakthroughs in the other – all occurring at hyper speed.

In this section, Accenture describes how science and technology are pushing the envelope in several different industries. In materials and energy, supercomputers operating at exascale will enable chemists to perform molecular simulations with greater accuracy, coming up with new materials to tackle problems such as climate change. And as we push up against the inevitable limits of even the most powerful supercomputers, quantum computing will step in for the chemistry field.

New rocket and satellite technologies are enabling scientists to conduct more experiments in space, where the ability to work in the unique conditions of space is “accelerating what we can learn about fluid physics, diseases, materials, climate change, and more, to improve life on Earth.” A decrease in the costs of components and an increase in the involvement of the private sector mean that the once-prohibitive costs of experimentation in space are coming down. There’s even a startup offering “digital lab space.”

In biology, the computing-science “big bang” has brought about “an entirely new field: synthetic biology…[which] combines engineering principles with biology to create new organisms or enhance existing ones.” This has implications for any number of life’s necessities: food, drugs, fuels. The costs of DNA sequencing and synthesis are having a Moore’s Law moment, halving every two years. (I didn’t check the arithmetic – I’ll trust Accenture here! – but in 2001, sequencing the human genome cost $100 million. Today, it’s about $600.)

The Accenture report is totally free. (You don’t even have to sign up to access it.) Always interesting to see what intelligent observers have to say about what’s happening in the world of science and technology.

Taking a look at Accenture’s Technology Vision 2023 – Part 1 of 2

We’re three-quarters of the way into 2023, but I just came across Accenture’s Technology Vision 2023 report. It’s pretty high level, definitely on the vision side, and geared towards Accenture’s enterprise clients, but I thought it might be interesting to give it a look.

In Accenture’s view, we’ve got a “step change” coming, in which our physical and digital lives will “seamlessly converge.” They see several technology trends that are pushing us towards this convergence. In this post, I’ll focus on their first two trends.

First up: Digital identity – “ID for everyone and everything” – in which identity becomes more centralized, rather than requiring a separate ID for every app and platform. An example they give is Apple Wallet, which has gone beyond being a payment system to “let users store and share government-issued IDs” like driver’s licenses. Wallet, of course, is already used as a central repository for digital items like boarding passes and tickets to shows and sporting events.

One major area where Accenture sees digital ID and the convergence of the physical and digital realms is healthcare. Currently, health-related data exists across a range of disparate platforms and hosts, “forcing patients to repeatedly validate who they are.” Having a trusted digital identity will remove a lot of friction from the system. Emerging technologies that will play a big role here include biometrics and tokenization. New technologies will help remedy the security problem inherent in the current approach to digital identity, where “The entire ecosystem has become overly reliant on leveraging functional data, like email addresses or social logins, to access services or identify people across the web,” methods that are susceptible to hacking.

Big – really big – data is the next area Accenture tackles. Here’s a forecast to consider: In 2020, 47 zettabytes of data were created. It’s predicted that, by 2035, each year will see the creation of 2,142 zettabytes of data. Talk about overwhelming! Organizations will need a robust data architecture to handle all the data they have access to, not to mention a better ability to ensure that the data is accurate and secure. Then, of course, there’s the challenge of actually gleaning insights and acting on them.

Remember that Accenture is focused on the convergence of the physical and digital worlds. Here’s where data comes in. Sure, much of the data that makes up big data is based on our direct interactions with the Internet – that, and someone entering information about us. But sensing technology is making an increasing contribution to the array of data available, capturing data about the physical world at an accelerating clip. Some of it comes from personal devices, like a smart watch that monitors, say, our heart rate. Other data comes from industrial systems and the environment, where, “as the cost of sensors drops and their capabilities grow, we are entering a time with extremely precise and expansive information about our bodies, environments, and world.”

All this accelerated data gathering would not, of course, be possible without communications technology. Today, “we are seeing the ability to transmit data grow dramatically—over long distances, to places previously unconnected, and in nearly real-time.” Today, we’re talking about 5G (and satellite) communications technology. 6G is in the works, with commercial availability currently expected by 2030. Transmission rates just keep on getting faster and faster, bringing us bigger and bigger data to grapple with.

As I said at the outset, the Accenture report is pretty high level, but interesting nonetheless. In my next post, I’ll bring you the other trends they discuss.

 

Tech in the Classroom

A few weeks ago, I saw a piece in the news about a school system that’s issuing students IDs that check them on and off their school buses. Using radio-frequency technology, the school can tell which kids are on the bus, where the bus is located, and where the kids get off. Eventually, parents will be able to monitor their kids’ bus rides in real time. I’m sure that plenty of folks are already keeping tabs on their bus riders using smartphones, but not all kids have smartphones, especially the little ones. If this technology had been around when my kids were in school, we would have been happy to have access to this tracking info when they were small.

The story got me thinking about what other technology is being used in schools.

The most obvious is, of course, virtual learning, which came to the fore during covid.

No argument here that the online meeting technology has made great strides over the years. (Does anyone else remember a few decades back, when visuals were added to what had been phone-based meetings? The latency could be jarring: quaking voices and jerky movements.) Even though the meeting tech has vastly improved, and there are many features – raised hands, breakout rooms, pop quizzes, project collaboration – that make online learning more engaging, I’m still a fan of classroom learning when it comes to kids.

Within the classroom, technology has many uses. Here are a few that I came across in a blog from Go Guardian, a digital learning vendor.

Today’s students are digital natives, with no memory of a world without smartphones, tablets, and ubiquitous apps. I’ve seen toddlers, barely able to walk or talk, occupying themselves with their folks’ phones. So gamified learning is a natural, and something that kids will find engaging. And who knows? Maybe playing a spelling game is more effective than writing those spelling words out ten times.

I love the idea of digital field trips. Nothing wrong with giving the kids an occasional break from the classroom routine with a field trip to a museum, a park, a concert, an historic attraction. But not every school is within school-bus distance of interesting attractions. And even if there are plenty of things to do in your immediate area, a digital field trip can bring students to Yellowstone Park, the Eiffel Tower, the Great Wall of China. A virtual trip to Paul Revere’s House and Concord Bridge could augment a lesson on the American Revolution.

A variation on this theme is incorporating video/multimedia lessons and presentations within the curriculum. Kids learn in different ways, and using “visual effects, photos, videos, and music” in lessons will let the teacher reach and engage with students with different learning styles. Having “guest speakers” present is another idea here. (No replacement, IMHO, for local folks – parents, grandparents, et al. – coming into the classroom and interacting with the students!)

Many of today’s kids – maybe even most of them – will graduate into a world where digital skills are required. So having students create digital content that’s related to what they’re learning gives them a chance to show off their creativity and hone their communications skills.

Some kids are faster learners than others, so I really like the idea of online activities for students who finish work early. That sure beats twiddling your thumbs, staring out the window, or pretending you’re still working.

I’m less enamored of the idea of integrating social media into the classroom, but I suppose learning to do social media right can help students become more aware of the pitfalls and discerning about misinformation.

Anyway, my kids are no longer in school, but as a tech guy, I like the idea of incorporating technology into the learning experience. After all, technology plays an increasingly important part in pretty much all aspects of what we do. That said, I still hope there’s plenty of offline time for quiet activities and for the kids to interact with their teachers and each other the old-fashioned way!

Car Talk

Some of today’s most compelling technological developments are happening in the automotive realm, which works for me, given my interest in both technology and cars. I’ve posted about this intersection a number of times, most recently in a July post about how buying a car loaded with chips is a bit like buying a laptop.

And once again, I thought I’d do some car talk here, with a roundup of a few recent articles I’ve read on the Big Three automotive tech categories: autonomous vehicles, connected cars, and EVs. So here goes.

Autonomous Vehicles

In August, the city of San Francisco okayed the expansion of driverless cabs. A few days later, after one of those driverless taxis got in a collision with a fire truck, the number of cabs from GM’s Cruise (one of the companies deploying the autonomous taxis) on the road was cut in half. But the dustup with the fire truck was just one of the problems that have occurred when there’s no cab driver in the cab. (There have been dozens of other incidents related to autonomous cab interference with emergency vehicles reported since January.)

Earlier in the same week, another Cruise cab somehow got mired in freshly poured concrete.

Then there was this unfortunate situation, which occurred when:

…about 10 Cruise vehicles stopped functioning in the middle of a busy street in San Francisco’s North Beach neighborhood, blocking traffic for 15 minutes. Drew Pusateri, a spokesman for Cruise, said in a statement that the cars had difficulty connecting to the Cruise employees who might have guided them out of the way because of a spike in cellular traffic caused by a music festival in the city’s Golden Gate Park about four miles away. (Source:  NY Times)

Yet again, it seems like autonomous vehicles are not quite ready for prime time.

Nonetheless, San Francisco is now piloting a program with driverless (EV) buses.

The free shuttle will run daily in a fixed route called the Loop around Treasure Island, the site of a former U.S. Navy base in the middle of San Francisco Bay. The Loop makes seven stops, connecting residential neighborhoods with stores and community centers. About 2,000 people live on the island. (Source: CBS News)

This is quite a bit different than driverless taxis, which can go anywhere. There are set stops and a known route. Plus, although the buses don’t have anyone in the driver’s seat – in fact, there is no driver’s seat – they do have an attendant who can step in and operate the bus using a handheld controller. (There’s no steering wheel in the bus.)

Even if they’re not 100% there yet, driverless vehicles sure look like the future to me.

Connected Cars

In my recent post, I wrote about all the technology that’s embedded in cars. And that’s a growing number of cars.

Connected vehicles are, according to the World Economic Forum, forecast to double by 2030, accounting for 96% of all shipped vehicles. (Source: EE Times)

With all these connected – and software-driven – cars, “vehicle tampering will become a real problem,” raising a variety of safety concerns. To meet the security challenge, there’s a new cybersecurity ISO standard specifically designed for vehicles that are on the road.

EETimes says that neural networks will help designers and manufacturers address these standards.

Neural networks aid in various aspects of data privacy and protection by generating and managing cryptographic keys used in encryption algorithms. By training on a dataset of secure keys, neural networks can learn to generate robust and unpredictable keys, enhancing the security of data encryption. In addition to enhancing encryption processes, neural networks also contribute to improving anomaly detection, key management and intrusion prevention.

The EETimes piece goes into some detail on how neural networks work. Well worth the read. And good to know that there’ll be technology to stand up to the security threats that connected vehicles present.

Electric Vehicles

While connected vehicles are growing in number, so are electric vehicles. The problem is finding charging stations. WiTricity, a Massachusetts company which produces wireless charging systems, is hoping to help meet the increased demand for charging systems with an alternative approach to plugging in.

Like Wi-Fi, which delivers internet data without wires, WiTricity uses magnetic fields rather than cables to give batteries a boost. Millions of people already recharge their smartphones this way, by placing them on a charging pad. Now, several Asian carmakers are using the WiTricity system to let drivers recharge their electric vehicles the same way: Just park the car directly above a charging pad at night, and forget about it…

The system uses a charging coil that emits a magnetic field which operates at a precise frequency. A nearby receiver coil is tuned to resonate at the same frequency and convert the incoming energy into electric current. Early versions were quite inefficient, transmitting only about 40 percent of the energy fed into the system. But these days, WiTricity claims it can transmit power to a receiver with over 90 percent efficiency. That’s about the same as you’d get from a plug-in car charger. And because the magnetic field only feeds power to the resonating coil, the charger has no effect on humans or nearby objects. (Source: Boston Globe)
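For the curious, the “tuned to resonate at the same frequency” part is ordinary LC resonance. Here’s a back-of-the-envelope Python sketch; the coil and capacitor values are made up for illustration and are not WiTricity’s actual design.

```python
import math

def resonant_frequency(inductance_h, capacitance_f):
    """Resonant frequency of an LC tank: f0 = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Illustrative values only: a 42 microhenry coil with an 82 nanofarad capacitor.
L = 42e-6
C = 82e-9
print(f"f0 = {resonant_frequency(L, C) / 1e3:.0f} kHz")
```

Transmitter and receiver coils tuned to the same resonant frequency exchange energy efficiently, while off-resonance objects nearby couple only weakly, which is the point the quote makes about the charger having no effect on humans or nearby objects.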

The upside is that this approach is more convenient than having to plug your car in, and it can all be operated through a smartphone or dashboard app. The downside is that, so far, a wireless charging pad is more expensive than a physical plug-in charging station, and most American EV makers haven’t added wireless charging to their feature set. (Early adopters are largely in the Asian market.)

As EVs are more widely adopted in the US, we should expect to see more developments in the charging (and battery) arena.

And that concludes today’s episode of Car Talk.


A tip of the hat to Car Talk, which ran for years on NPR, and to the car-talking Magliozzi brothers, Tom and Ray.

Agricultural Technology

Us city-slickers – and suburb-slickers – likely don’t spend a lot of time thinking about new technology as it applies to agriculture. Yet few industries are more pervasive, or more critical to our wellbeing. Emerging technology in this arena is helping tackle myriad environmental, economic, and food security issues as the world population increases and places greater demands on the agricultural industry to operate in a more productive and efficient manner.

In a recent article on embedded.com, Silicon Labs’ Chad Steider outlined a number of use cases where wireless technology is deployed to make agriculture smarter. Sensors set out in fields gather environmental data to help determine optimal crop rotation and yields, and can be used to identify diseases and infestations before they can destroy an entire crop. Indoors, sensors can also be used to monitor real-time conditions in greenhouses and adjust lighting, temperature, and humidity as needed. Beyond planted crops, sensors can help farmers and ranchers track their herds and monitor animal health.

Overall:

While IoT and smart sensor technologies are a goldmine for highly relevant, real-time data, the use of data analytics helps farmers make sense of it and come up with important predictions around crop harvesting time, the risks of diseases and infestations, yield volume, and more. While farming is inherently highly dependent on weather conditions, data analytics tools help make farming more manageable and predictable. (Source: embedded.com)

Steider also provides a discussion of Wi-SUN Technology, “a leading open-source, open-standard protocol for smart city and smart utility applications and is primed for smart agriculture as well.” This section makes for an interesting read.

His article prompted me to poke around and look for additional news on innovation in the agri-tech sector, which led me to a piece from MassChallenge (an organization focused on tech innovation and entrepreneurship) that was published earlier this year. Here are some of the technologies they see on the horizon.

First up: Bee Vectoring Technologies. Pollination is critical to the production of many crops, including blueberries, apples, and sunflowers.

BVT uses commercially reared bees to deliver targeted crop controls through pollination, replacing chemical pesticides with an environmentally safe crop protection system.

The system doesn’t require spraying water or the use of tractors. Instead, the scientifically designed bumblebee hive allows bees to pick up a trace amount of pest control powders on their legs to spread as they travel within the field.

Then there’s Precision Agriculture (which was also mentioned by Chad Steider in his embedded.com piece.) Precision Agriculture focuses on using big data captured by remote sensors to help farmers make better crop and animal management decisions that can be carried out using drones, robots, and other automation approaches.

For indoor vertical agriculture, hydroponics and aeroponics enable indoor farming where farmers can grow crops on stacked up shelves.

There has been a great deal of concern over the use of genetically modified food, and whether it’s harmful to human health and to the environment. In the near future, minichromosomes will be able to “enhance a plant’s traits without altering the genes in any way.”

In short, minichromosome technology allows genetic engineers to create crops that require fewer pesticides, fungicides, and fertilizers, reducing reliance on harmful chemicals. It also lets them achieve bio-fortification and enhance a plant’s nutritional content.

Laser scarecrows may well be my favorite innovation on the MassChallenge list. While it was always fun when, as kids, we spotted a scarecrow in a farmer’s field, stuffing near-rag clothing with straw may not have been all that effective.

After discovering that birds are sensitive to the color green, a researcher from the University of Rhode Island helped design a laser scarecrow, which projects green laser light. The light isn’t visible to humans in sunlight, but it can shoot 600 feet across a field to startle birds before they destroy crops.

Early tests with laser scarecrows found that the devices can minimize crop damage by reducing the bird population around farmland by 70% to 90%.

Interesting! I don’t recall ever seeing an old-school scarecrow wearing green.

While us city- and suburb-slickers may not think about agricultural technology all that often, it’s good that others are doing it. That said, it’s late summer, and in Central New York, where I live, there is plenty of farming. That means we’ll be able to enjoy native corn and tomatoes, and, in another couple weeks, our local apples. Thankfully, there’s technology that will enable us to continue to enjoy this bounty!

A Different White Coat Effect

There’s been no escaping the news about the drastic heat being experienced throughout the Southwest and the South this summer. And if you live in those regions, there’s been no escaping heat that’s been well above 100 degrees for days, even weeks, at a time. We don’t have heat like that in Central New York. At least not yet. We do typically have a number of 90+ days, and that number has been creeping up.

The last few summers have been hotter than average in Central New York with 15, 18, and 21 90+ degree days in the summer of 2022, 2021 and 2020, respectively. Remember, the average number of 90+ degree days in Syracuse each year is 10 over the last 30+ years. (Source: LocalSYR.com)

But 100+ days remain a rarity. At least for now.

I can only imagine how difficult it must be to live and work in that kind of heat.

As the earth continues to heat up, most scientists predict that there’ll be more and more of what would have once been considered weather anomalies – the new normal will be abnormal.

The good news is that there are many technologists focusing on ways to remediate the effects of global warming.

One such technologist is Xiulin Ruan, a Purdue University professor of mechanical engineering who has been working for years to develop a white paint that can cool buildings down without the side effects produced by massive use of traditional air conditioning.

In 2020, Dr. Ruan and his team unveiled their creation: a type of white paint that can act as a reflector, bouncing 95 percent of the sun’s rays away from the Earth’s surface, up through the atmosphere and into deep space. A few months later, they announced an even more potent formulation that increased sunlight reflection to 98 percent.

The paint’s properties are almost superheroic. It can make surfaces as much as eight degrees Fahrenheit cooler than ambient air temperatures at midday, and up to 19 degrees cooler at night, reducing temperatures inside buildings and decreasing air-conditioning needs by as much as 40 percent. It is cool to the touch, even under a blazing sun, Dr. Ruan said. Unlike air-conditioners, the paint doesn’t need any energy to work, and it doesn’t warm the outside air. (Source: NY Times)
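A rough back-of-the-envelope calculation (mine, not from the article) shows why reflecting 98 percent of sunlight lets a surface sit below the surrounding air: the sliver of solar energy the paint absorbs is small compared with the thermal radiation a roughly room-temperature surface gives off.

```python
# Back-of-envelope radiative balance; the inputs are illustrative assumptions.
SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W/m^2/K^4
solar_irradiance = 1000   # typical peak sunlight at the surface, W/m^2
reflectance = 0.98        # reflectance reported for the Purdue paint

absorbed_solar = (1 - reflectance) * solar_irradiance   # about 20 W/m^2
T = 300                                                 # roughly 80 F, in kelvin
ideal_emission = SIGMA * T**4                           # about 460 W/m^2 for a blackbody

print(f"absorbed sunlight ~ {absorbed_solar:.0f} W/m^2")
print(f"ideal emission at 300 K ~ {ideal_emission:.0f} W/m^2")
# Even if only a fraction of that emission escapes through the atmosphere's
# infrared window, it can exceed the ~20 W/m^2 of absorbed sunlight, so the
# painted surface can shed more heat than it takes in and cool below ambient.
```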

The paint was developed with rooftops in mind, but there are other applications in the making. E.g., a “more lightweight version that could reflect heat from vehicles.”

I’m always delighted to see my fellow engineers getting in on the act of improving life here on earth. One is Jeremy Munday, a professor of electrical and computer engineering at UC Davis.

He calculated that if materials such as Purdue’s ultra-white paint were to coat between 1 percent and 2 percent of the Earth’s surface, slightly more than half the size of the Sahara, the planet would no longer absorb more heat than it was emitting, and global temperatures would stop rising.
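Out of curiosity, I ran the numbers behind that comparison, using commonly cited figures for the areas involved (treat them as approximations):

```python
earth_surface_km2 = 510e6   # commonly cited total surface area of Earth, km^2
sahara_km2 = 9.2e6          # commonly cited area of the Sahara, km^2

for pct in (0.01, 0.02):
    coated = earth_surface_km2 * pct
    print(f"{pct:.0%} of Earth's surface = {coated / 1e6:.1f} million km^2 "
          f"= {coated / sahara_km2:.2f} Saharas")
# 1% works out to roughly 5 million km^2, a bit more than half the Sahara,
# which squares with the comparison above.
```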

This would not, of course, be as simple as white-topping half the Sahara Desert. It would have to be spread across many settings in many regions, with checks in place to make sure that weather disruptions and other unintended consequences aren’t introduced.

The Purdue white paint, which should be available commercially next year sometime, is not a panacea. The challenge is a complex one, with many intertwined economic, political, geopolitical, environmental, and cultural implications. And it’s not going to be solved overnight.

Nonetheless:

Large-scale radiative cooling, Dr. Munday said, would be akin to a life raft.

“This is definitely not a long-term solution to the climate problem,” Dr. Munday said. “This is something you can do short term to mitigate worse problems while trying to get everything under control.”

Meanwhile, one of the unintended consequences of Xiulin Ruan’s work was that, in 2021, Guinness named the Purdue ultra-white paint “the whitest paint ever.”

Guinness Book of World Records aside, it’s good to see that solutions are being developed that will help “get everything under control” and give Mother Earth some more breathing room. So I’m all for the Purdue paint, hoping that it produces a new and different white coat effect.

Technology taking off in the travel and hospitality industry

With the vacation season at its peak, it’s time to take a look at the technology that’s becoming more and more prevalent in the travel and hospitality industry.

These days, it seems quaint to see someone getting on a plane with an old-school paper boarding pass. And many restaurants have a QR code on the table that lets you access the menu (and, in some places, even order your meal.)

The hotel industry is also investing heavily in technology.

Travelers of a certain age may remember when you were handed a key on a large fob, making it pretty difficult to lose. (In Europe, the keys were sometimes kept on a longish wooden handle – too long to comfortably carry them around with you, so you left it at the lobby desk when you left the hotel and retrieved it when you needed to get back in your room.)

Most hotels have long since replaced mechanical keys with electronic cards that you insert, swipe, or tap.

Those smart cards are starting to look as old fashioned as the key on the fob, however, as digital room keys are increasingly being deployed, using bar or QR codes. These are similar to the digital tickets we’ve been using for a while for sports events, concerts, and the theater, but are relatively new to the hotel scene. (Since you didn’t keep your hotel key anyway, it’s no great loss to go digital, but, for those who liked to keep ticket stubs as souvenirs, the digital route is less than welcome. I suppose folks can print out an image of their barcode, or save it as some sort of personal NFT…)

Digital keys are just part of the growing use of technology in hotels.

The reservation process has long been online, either through a hotel or chain website or through a general-purpose booking or travel aggregator, but since covid, “contactless check-in” via phone app or self-service kiosk has become available in many hotels.

And with so many people abandoning any use of cash, tipping apps are appearing so that travelers can tip “service workers like valets, housekeepers, bartenders, and bell staff.”

Digital tipping apps [are now in use that] allow guests to leave tips via credit cards or other digital payment methods. By scanning a QR code or clicking a link, guests can access the platform, and the platform handles tipping employees out. (Source: Hotel Tech Report)

Not to mention all sorts of in-the-room tech that Amy Tara Koch, a NY Times travel writer, recently encountered.

Koch’s encounter with a tech-ified Swiss hotel was not all that pleasant. When she went to turn in, she found that a TV disguised as a bathroom mirror had been turned on as part of the turndown service. She couldn’t figure out how to turn it off. Koch also couldn’t find a phone in the room and figured it was easier just to walk down to the reception desk and see if she could find someone to turn the “smart mirror” off. Unfortunately, she didn’t realize she was also going to need assistance in turning the bathroom lights off. All part of “the glut of smart technology in hotel rooms.”

Some of that technology is devoted to improving the guest experience.

Neha Jaitpal, the global general manager for Honeywell’s Building Technologies sector, oversees “intuitive” solutions for more than 2 million hotel rooms worldwide, working for companies like Accor and Fairmont Hotels & Resorts. “Imagine arriving at your hotel room after a long day of travel, and it’s already adjusted to your preferred settings — from the temperature, lighting and even the position of the drapes,” she said. “Through automation, guest rooms can be personalized without the need for human interaction.” (Source: NY Times)

Smartphones are being used to play music, book a spa session, order room service, request extra towels. Pretty much any hotel service you can think of.

And that is improving the guest experience. For some guests, anyway.

Koch is not one of them.

These “guest enhancements,” touted as in-demand by hoteliers and the tech companies that make them, are not in demand by me.

And Koch isn’t alone.

“I used to walk into a hotel room and relax. Now it is a job to figure out how to use the lights and switch off the television, which, of course, is set to the hotel’s promotional station,” said Jill Weinberg, 67, a regional director with the U.S. Holocaust Memorial Museum, and like me, a frustrated hotel guest. “Here is an entirely new system to waste mental energy upon every single time I travel.”

Some hotels are actively resisting the embrace of technology. In these hotels, beyond Wi-Fi being available and maybe a smart room key, travelers turn on the lights using a switch, turn on the TV using a remote, and pull open the drapes the old-fashioned way.

All very interesting, but as anyone who’s been watching emerging technology for years knows, there’s something inexorable about the technification of our day-to-day experiences.

Safe Travels!

The travel season is upon us, and those who will be flying somewhere are probably gearing up for the security hassle they’ll find at the TSA checkpoint.

There’s getting your ID out, and (maybe) your boarding pass – old-school paper or on your phone. (Remind me again where I need to show it.) Then there’s juggling your identification with your carry-on luggage. And, if you’re traveling with your family, hanging on to your kids.

Then there’s your stuff.

Laptop out? Laptop open? Tablet? Kindle? Phone?

Did you remember to put all your liquids in a 1 quart baggie? Are all those liquids smaller than 3.4 ounces? Is toothpaste a liquid? Is peanut butter?

It goes without saying that we all want to be safe when we fly, but at times contending with the protocols can be an irritant when all you want to do is make it to your gate on time.

Fortunately, the TSA has been introducing technology that will expedite the process. The new tech will help on two fronts: ID checking to make sure that you’re the same person who’s ticketed on the flight; and making screening your carry-ons more convenient.

In many airports throughout the country, the TSA uses Credential Authentication Technology (CAT) to help travelers speed up their journey through security. There are two different types of CAT machines in use. With CAT-1 machines, which were introduced in 2019 and are now in use in 200 American airports, you don’t need to show your boarding pass to get through, just some form of approved ID (driver’s license, passport).

CAT-1…scans and analyzes a passenger’s photo ID and then automatically confirms their flight details through the Secure Flight database along with their pre-screening status (like if they have TSA PreCheck) in real time. (Source: Travel & Leisure)

While you do still need your boarding pass to get on the plane, the CAT-1 machines mean that there’s one less piece of paper (or phone app) to worry about at security.

Better yet are the CAT-2 machines, which rely on facial recognition technology. So far, these are in test mode and are only deployed in 16 airports, with plans to increase that number to 28.

…unlike the first generation of machines [the Cat-1’s], the CAT-2 units are also equipped with a camera that can compare the photo ID with a real-time photo of the passenger. They are also equipped with a reader capable of scanning a state-issued digital driver’s license or digital identification card, according to the TSA.

Using biometric technology automates the process, replacing the human element that required the TSA agent to examine an ID and then look at the traveler’s face.

Travelers who don’t want to go through security via the facial recognition technology can opt out, but be assured that the photos taken are overwritten by the picture taken of the next traveler, and memory is entirely purged when the agent logs off.

Even more welcome, as far as I’m concerned, is the introduction of new Computed Tomography x-ray machines. These new machines take 3D images of your carry-on. They’re a new and improved way for agents to identify “weapons, explosives, and other banned items.” For those of us who don’t typically carry these sorts of items, this technology is also what will let us keep our electronics and mini-shampoo bottles in our carry-on bags.

So far, unless you’re over 75 years of age or have TSA PreCheck, you’ll still need to take your shoes off to pass through security. Hopefully, technology to take care of that bit of hassle will be available soon as well.

Safe travels!

Edge Intelligence in the IoT Era

Since the first of the year, EE Times has been running a series on design challenges in the IoT era. While providing a general overview of these challenges, they’ve focused on a few more specific areas: connectivity, security, and edge intelligence.

For the final entry in our “series on the series,” I’ll be summarizing Nitin Dahad’s piece, “Software Portability is Key Driver for Embedded IoT.”

But first a bit of a primer on what we mean by edge intelligence.

Basically, edge intelligence means that data is analyzed and solutions generated as close as possible to the point where the data itself is captured, rather than in the cloud. This helps eliminate latency, since the data doesn’t have to be transported anywhere, which, in turn, can reduce security risks.
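As a concrete, if greatly simplified, illustration of the pattern, here’s a sketch of what “decide at the edge, transmit only what matters” might look like in Python. The threshold logic and the send_alert placeholder are mine, for illustration only; they aren’t tied to any particular product or API.

```python
import random
import statistics
from collections import deque

WINDOW = 50          # recent samples kept on the edge device
THRESHOLD = 3.0      # alert when a reading is 3 standard deviations out

recent = deque(maxlen=WINDOW)

def send_alert(value):
    # Placeholder for the only traffic that leaves the device: a small
    # alert message instead of the raw sensor stream.
    print(f"ALERT: anomalous reading {value:.2f}")

def process_reading(value):
    """Runs entirely on the edge device: keep stats locally, transmit only anomalies."""
    if len(recent) >= 10:
        mean = statistics.mean(recent)
        stdev = statistics.pstdev(recent) or 1e-9
        if abs(value - mean) / stdev > THRESHOLD:
            send_alert(value)
    recent.append(value)

# Simulated sensor loop standing in for a real temperature or vibration sensor.
for _ in range(500):
    reading = random.gauss(25.0, 0.5)            # normal behavior
    if random.random() < 0.01:
        reading += random.choice((-1, 1)) * 5.0  # occasional fault
    process_reading(reading)
```

The point is that the raw sensor stream never leaves the device; only the occasional small alert does, which is where the latency and security benefits come from.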

STL Partners is a UK consultancy that has a list of edge computing use cases on its website. One they cite is the use of autonomous vehicles in truck convoys. STL sees edge computing removing “the need for drivers in all trucks except the front one, because the trucks will be able to communicate with each other with ultra-low latency.” (Maybe not so good for the truckers, but good for cost reduction and efficiency in the trucking industry.)

Another is in-hospital patient monitoring, where edge computing will greatly reduce the possibility that confidential patient data will be hacked. Cloud gaming and streaming services, where latency is so critical, also present strong use cases for computing on the edge.

As edge intelligence becomes more prevalent, Nitin Dahad writes that “hardware compute architectures are becoming more complex, and, in turn, software development is becoming more challenging.” He suggests that software portability via the use of containers is a key approach to overcoming any challenges on the edge.

…containers can wrap up a program along with all of its dependencies into a single, isolated executable environment. In fact, containers have also been described as lightweight virtual machines.

Google Cloud adds that containers make it easy to share CPU, memory, storage and network resources at the operating-system (OS) level and offer a logical packaging mechanism that allows applications to be abstracted from the environment in which they actually run.

The benefits of using containers – “a clear separation of responsibility;” the ability to run virtually anywhere: “on virtual machines, physical servers and a developer’s machine;” and their ability to enable application isolation “at the OS level, providing developers with a view of the OS logically isolated from other applications” – offer portability, allowing “an application to run independently of the host environment, resulting in consistent execution across a wide range of environments.”

Microservices, which break up an application “into a collection of small autonomous services,” are another way to ensure software portability. (Here, Dahad goes into some detail on how Luos – “an open-source lightweight containerization platform” – works.)
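As a generic illustration of the microservice idea (not Luos itself, which targets embedded devices), here’s a minimal Python service that does exactly one small job and exposes it over HTTP; an application would be composed of several of these, each independently deployable and updatable.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class TemperatureService(BaseHTTPRequestHandler):
    """A single-purpose service: convert Celsius readings to Fahrenheit."""

    def do_GET(self):
        # Expect a path like /convert?c=21.5
        if self.path.startswith("/convert"):
            try:
                celsius = float(self.path.split("c=")[1])
            except (IndexError, ValueError):
                self.send_error(400, "expected /convert?c=<number>")
                return
            body = json.dumps({"celsius": celsius,
                               "fahrenheit": celsius * 9 / 5 + 32}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Each such service runs, scales, and is updated independently of the others.
    HTTPServer(("0.0.0.0", 8080), TemperatureService).serve_forever()
```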

This is followed by a section on software-defined products.

Software-defined products and services are enabled by the combination of hardware programmability and the ability to add or change functionality with over-the-air (OTA) updates. Virtualization and abstraction of workloads from the underlying hardware can enable more flexible and agile hardware platforms and delivery of software-defined or software-enabled services.

In software-defined products, functions become more independent from their hardware specification, enabling a broader feature set and faster evolution, as the functions are much easier to upgrade. By definition, the main product functions are software-driven and portable, able to take advantage of new hardware and easy to move to different hardware variations.

This section was especially interesting to me in that it discusses the emergence of software-defined vehicles. As a car – and tech – enthusiast, I found this compelling reading.

But overall, I found the entire EE Times series compelling reading. As the IoT grows into the Internet of Everything, we’ll have all the more reason to make sure that, as developers, we’re up to the connectivity, security, and edge computing challenges that abound.