Jack Ganssle’s “How embedded projects run into trouble”: Reasons 2 and 1

For the last couple of months, my bi-weekly posts have been devoted to Jack Ganssle’s excellent series on the top reasons why embedded projects run into trouble that’s been running on embedded.com. As an engineer who’s been involved in building embedded applications throughout my career, I found myself nodding my head in agreement each time I read one of Jack’s articles.  From his number 10 reason – not enough resources – down through his number 1 – unrealistic schedules – I’ve recognized a number of situations that I’ve personally been involved in. But it’s also been gratifying to recognize that, here at Critical Link, we put a good deal of focus on avoiding problems like the ones Jack has on his list.

With that, here’s my summary of Jack’s Top Two.

2 – Quality gets lip service

Some of Jack’s Top Ten – maybe even all of them – have issues around quality at their core. Here he addresses it directly.

Jack points out that American cars used to have a terrible reputation for quality. Then the Japanese began entering the market with cars that were both higher in quality and lower in price than American vehicles. American car manufacturers eventually paid a few visits to Japan and discovered what the Japanese had learned from American quality gurus like W. Edwards Deming: “that the pursuit of quality always leads to lower prices.” These days, quality is no longer a problem for American car makers:

Today you can go to pretty much any manufacturing facility and see the walls covered with graphs and statistics. Quality is the goal, and it is demanded, measured, and practiced.

Alas, Jack finds that, in firmware labs, there tends to be less of a focus on quality. He wants developers to spend more time on test, and not let it wait until the end of a project. Because we all know that, if schedules get tight, something’s got to give and if testing is on the tail end of the schedule, well…

But test is not enough. Quality code happens when we view development as a series of filters designed to, first, not inject errors, and second, to find them when they do appear.

We’re not a firmware lab, per se, but the point about testing and best efforts to “not inject errors” is well taken.

I’m very proud that, at Critical Link, we take quality very seriously across the board. (Sorry for the pun. I couldn’t resist.) I’d be happy to talk to anyone about our quality focus, but to take a shortcut, I’ll note that we’re ISO 9001:2015 certified.

1 – Unrealistic Schedules

Jack’s #1 in this series of articles is one he characterizes as “a fail that is so common it could be called The Universal Law of Disaster:” unrealistic schedules. In his discussion of this problem, he catalogs the reasons why those impossible schedules are something that we universally (and perpetually) have to grapple with as developers. He talks about top-down pressures to hit aggressive dates in order to secure a project.

These pressures aside, he doesn’t let the engineers doing the estimating off the hook:

Too often we do a poor job coming up with an estimate. Estimation is difficult and time consuming. We’re not taught how to do it… so our approach is haphazard. Eliciting requirements is hard and boring so gets only token efforts.

Jumping into coding too quickly is #9 in Jack’s top 10. As I noted in my commentary on that reason, “at Critical Link, our experience as systems engineers means we place tremendous importance on planning and being careful about our requirements.” I’ve found that, if you put the time in on the front end to very thoughtfully figure out what the exact requirements are, you’re rewarded with a much more realistic schedule.

There are, of course, invariably changes to a project. That’s just the nature of the development beast.

There will always be scope modifications. The worst thing an engineer can say when confronted with a new feature request is “sure, I can do it.” The second worst thing is “no.”

“Sure, I can do it” means there will be a schedule impact, and without working with the boss to understand and manage that impact we’re doing the company a disservice. The change might be vitally important. But managing it is also crucial.

“No” ensures the product won’t meet customer needs or expectations. At the outset of a project no one knows every aspect of what the system will have to do.

By the way, the answer to allowing some requirements creep without impacting the schedule is NOT to “throw more resources at it,” which Jack sees as “usually an exercise in futility.” Best to a) do solid planning and careful scheduling from the get-go, and b) keep everyone who needs to be in the loop when the schedule looks like it might slip.

Anyway, that’s it for the Top Ten Reasons Embedded Projects Run into Trouble. Good things to keep in mind, even for us old-timers! I hope you enjoyed reading this series of posts as much as I enjoyed writing them. I think the list is pretty complete, but if you have other problem areas you think should have made the list, please let me know. I’d love to hear from you.

Here are the links to my prior posts summarizing Jack’s Reasons 10 and 9, Reasons 8 and 7, Reasons 6 and 5, and Reasons 4 and 3.

Jack Ganssle’s “How embedded projects run into trouble”: Reasons 4 and 3

For the last month or so, Embedded.com has been running an excellent series on the Top Ten reasons embedded projects run into trouble, according to Jack Ganssle, an embedded systems guru. I’ve been having a good time counting down with Jack. As I’m writing this summary of his third and fourth reasons, reasons 1 and 2 have not yet been revealed. But stay tuned. I’ll be getting to them in another two weeks.

Anyway, here are reasons 4 and 3.

4 – Writing optimistic code

Jack starts off this piece in a very humorous way: with a store receipt for a couple of small sailboat parts that showed a charge of $84 Trillion.

This is an example of optimistic programming. Assuming everything will go well, that there’s no chance for an error.

I don’t know if the appearance of an obvious error like this is really an example of optimistic programming; it’s certainly an example of poor testing. But maybe they’re one and the same thing. In any case, Jack points out that, given that “the resources in a $0.50 MCU dwarf those of the $10m Univac of yore… We can afford to write code that pessimistically checks for impossible conditions.” Which we need to do given that “when it’s impossible for some condition to occur, it probably will.” Let’s call this point Ganssle’s Law: the coding equivalent of Murphy’s Law.

Giving advice specific to embedded programming, he suggests that almost every switch statement should have a default case, and that assert macros should be universally used:

Done right, the assertion will fire close to the source of the bug, making it easy to figure out the problem. Admittedly, assert does stupid things for an embedded system, but it’s trivial to rewrite it to be more sensible.
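
To make that concrete, here’s a minimal sketch in C of what a defensive default case and a leaner, embedded-friendly assert might look like. This is my own illustration, not code from Jack’s article, and the fault_handler() hook is hypothetical – in a real system it might log the failure, put the outputs in a safe state, and let the watchdog reset the part.

/* Hypothetical fault hook: log the fault, go to a safe state, let the watchdog reset. */
void fault_handler(const char *file, int line);

/* A slimmed-down assert for embedded targets: no printf(), no abort(),
   just a call to a handler we control. Compiles away when NDEBUG is set. */
#ifdef NDEBUG
#define ASSERT(cond) ((void)0)
#else
#define ASSERT(cond) ((cond) ? (void)0 : fault_handler(__FILE__, __LINE__))
#endif

typedef enum { MODE_IDLE, MODE_RUN, MODE_CAL } run_mode_t;

void set_mode(run_mode_t mode)
{
    switch (mode) {
    case MODE_IDLE:
        /* ... enter idle ... */
        break;
    case MODE_RUN:
        /* ... enter run ... */
        break;
    case MODE_CAL:
        /* ... enter calibration ... */
        break;
    default:
        /* An "impossible" value: a corrupted variable, a bad cast, or a new
           enum member nobody remembered to handle. Trap it here, close to
           the source. */
        fault_handler(__FILE__, __LINE__);
        break;
    }
}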

Jack’s bottom line:

The best engineers are utter pessimists. They always do worst-case design.

Considering myself a pretty good engineer and a not especially pessimistic sort of guy, I hadn’t thought of it this way. But I’m definitely down with doing worst-case design.

3 – Poor resource planning

Embedded systems developers face a challenge that software developers living in a world of cheap memory and cheap storage don’t have to deal with: resource constraints.

With firmware, your resource estimates have got to be reasonably accurate from nearly day one of the project. At a very early point we’re choosing a CPU, clock rate, memory sizes and peripherals. Those get codified into circuit boards long before the code has been written. Get them wrong and expensive changes ensue.

But it’s worse than that. Pick an 8-bitter, but then max out its capabilities, what then? Changing to a processor with perhaps a very different architecture and you could be adding months of work. That initial goal of $10 for the cost of goods might swell a lot, perhaps even enough to make the product unmarketable.

This ties back to an earlier reason that Jack listed for why embedded projects can go awry, his Number 9: jumping into coding too quickly. The answer to that one was to pay careful attention to requirements. The same thing applies here, but with the added wrinkle that you’re typically subject to cost constraints on the hardware you pick, and that you also need to avoid building in obsolescence. So:

A rule of thumb suggests, when budgets allow, doubling initial resource needs. Extra memory. Plenty of CPU cycles. Have some extra I/O at least to ease debugging.

Next post, we’ll wrap up with the reasons that get Jack’s nod for first and second place.

Here are the links to my prior posts summarizing Jack’s Reasons 10 and 9, Reasons 8 and 7, and Reasons 6 and 5.

Jack Ganssle’s “How embedded projects run into trouble”: Reasons 6 and 5

Embedded.com has been running an excellent series of articles by Jack Ganssle on his top ten reasons why embedded projects may find themselves in trouble, and I’ve been summarizing Jack’s articles here over my past couple of posts. Here’s my take on Reasons 10 and 9, and on Reasons 8 and 7.

It’s not surprising that there are so many ways in which even small, simple projects can go off the rails; some of those pitfalls are universal, while embedded development is complex enough to have pitfalls all its own. In today’s post, we’ll address one of each.

6 – Crummy analog/digital interfacing

First there’s the good news:  thanks to advancements like multi-layer (and cheaper) PCBs and a mind-boggling array of ICs, analog design has gotten easier over the years. The not so good news:

On the other hand, the more resolution we can get, the more we need so other issues become paramount, like noise…Noise can be fought by many means, but I see too many teams blithely throwing some sort of filter at it. A simple bit of code can average some number of samples, or implement IIR/FIR filters. These are great solutions to some problems, and disasters for others. Any sort of filter will distort the data to some extent and many will delay the result, which can be a problem for fast real-time systems. It’s often better to use careful analog design to minimize the noise before it hits an ADC.
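
For reference, the “simple bit of code” that averages some number of samples usually looks something like the boxcar filter sketched below (my own illustration, with made-up names, not something from Jack’s article). It does knock down random noise, but it also illustrates his warning: the output lags the input by roughly half the window length, which can matter a great deal in a fast real-time loop.

#include <stdint.h>

#define AVG_WINDOW 8u   /* power of two keeps the divide cheap */

/* Boxcar (moving-average) filter over the last AVG_WINDOW ADC readings.
   Smooths random noise, at the cost of delaying the result by roughly
   AVG_WINDOW/2 samples. */
uint16_t boxcar_filter(uint16_t new_sample)
{
    static uint16_t history[AVG_WINDOW];
    static uint32_t sum;
    static uint32_t idx;

    sum -= history[idx];          /* drop the oldest sample  */
    history[idx] = new_sample;    /* remember the newest one */
    sum += new_sample;
    idx = (idx + 1u) % AVG_WINDOW;

    return (uint16_t)(sum / AVG_WINDOW);
}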

Jack provides a couple of design tips, e.g., using guard traces, optoisolators, or ferrites depending on the circumstances. He ends with a warning that we should put careful analysis into our designs and not forget the basic electronics/circuit theory we studied way back when we were in college. As Jack says, “Electronics is a lot of fun.”

And now for the more universal problem that Jack identifies.

5 – Weak managers or team leads

Jack starts out with Management 101:

Managers are supposed to manage people. That means a lot of things: protect the team from frivolous distractions, make sure they have the needed resources, cull poor performers, and much more.

He then addresses the importance of following firmware standards (and why managers need to enforce the standards and provide the means to ensure conformance, like code reviews or automated tools).

Jack goes on to briefly discuss Deming’s Plan-Do-Check-Act (PDCA) cycle for product improvement, and then gets into why it’s imperative that managers not be constantly interrupting work with “a barrage of text messages, email, IMs and the like.” And Jack really doesn’t like the “open office” concept, which he sees as a productivity destroyer.

Managers must find ways to enhance, not destroy, productivity.

He then gives a shout out to the importance of management placing an emphasis on quality.

Whether you’re a manager or not, Jack’s piece is well worth a quick read as a reminder of the value of good management and the responsibilities that managers have.

Jack Ganssle’s “How embedded projects run into trouble”: Reasons 8 and 7

A couple of weeks ago, I posted a summary of Jack Ganssle’s first two articles (from a series currently running on embedded.com) discussing reasons Ten and Nine why embedded development projects “run into trouble.”  Here are reasons Eight and Seven:

8 – The undisciplined use of C and C++

As embedded systems developers we quite naturally rely on C and C++ (along with C#, Assembler, Java, Ada, and a variety of scripting languages including Linux shell scripts, Perl scripts, Windows batch files, by the way…). So I was quite naturally intrigued by what Jack has to say about their use.

First off, he reminds us that language-wise, C is right up there with Latin, dating back to 1972. (Which, he points out, makes it older “than most embedded developers”. Just not me.) He notes why C and C++ are so attractive:

C is uniquely suited to deeply-embedded work as it excels at manipulating bits, bytes and hardware. It’s a great choice for real-time work as it doesn’t have some of the problematic features of more modern languages, like garbage collection. C++, of course, brings the advantages of object-oriented programming to the discipline.

But he then goes on to talk about why he believes that using them can be dangerous, especially for those lacking in expertise.

C/C++ have unspecified, implementation-defined, and undefined behaviors. Lots of them…There are plenty of ways to get stung even outside of these behaviors. Consider dynamic memory allocation. That’s a very useful feature for RAM-constrained systems. But it’s fundamentally non-deterministic. Malloc and free, malloc and free, and there’s no way to prove that some strange combination of inputs won’t cause enough heap fragmentation that the requested allocation fails.

When there’s no choice but to use dynamic memory, a disciplined use of C is to check malloc’s return value and take some action if malloc was unable to do its job. Yet I almost never see that test performed.
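
For what it’s worth, the check Jack is describing only takes a few lines. Here’s a hedged sketch of the disciplined pattern – the buffer name and the log_error() hook are hypothetical, and the right failure response will obviously be system-specific:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

void log_error(const char *msg);   /* hypothetical logging hook */

/* Allocate and zero a packet buffer. The point is simply that malloc()'s
   return value is tested and there is a defined failure path, rather than
   assuming the heap can never be exhausted or fragmented. */
uint8_t *alloc_packet_buffer(size_t len)
{
    uint8_t *buf = malloc(len);

    if (buf == NULL) {
        /* Allocation failed: log it, shed load, or fall back to a
           statically allocated emergency buffer -- just don't blindly
           dereference the pointer. */
        log_error("packet buffer allocation failed");
        return NULL;
    }

    memset(buf, 0, len);
    return buf;
}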

Jack then dives into the perils of pointers. Definitely worth reading him for the warnings he lays out. He ends his article by underscoring that he is, in fact, a proponent of C/C++. Just that he’s against their undisciplined use. Anyway, I’d like to point out that we, too, are proponents of C/C++ and very much committed to their disciplined use.

7 – Bad Science

This article is a very cautionary discussion on the importance of understanding the science underlying the products we’re developing embedded systems for. Take nothing for granted, he warns. Don’t assume you get the science. Delve into it and make sure that you’ve got things straight.

His illustrations are interesting. For starters, he discusses a system built in the wayback that used IR light to measure grain proteins.

Spinning filters created 800 distinct wavelengths which impinged on the sample. Lead sulfide detectors read the reflected signal which was digitized and handled by an 8008.

Did you know lead sulfide detectors are more sensitive to temperature variations than to light? We didn’t. The data was garbage, and for months we sought answers before learning this simple fact. More months were lost as we tried cooling them with Peltier plates… and they got too cold. The final solution was a combination of carefully-controlled Peltiers and heating elements.

He provides another example that involves using carbon tetrachloride. Bad decision:

We didn’t know that carbon tet is quite toxic when inhaled; OSHA nailed us.

The substitute they used just plain didn’t work.

The solution to these and other bad science problems: do your research.  For Critical Link, that means working in close partnership with our clients to make sure we understand the underlying science behind their applications, and the implication this has for the embedded solutions we develop for them. (I’m going to do a bit of a brag here: we’re really good at this.)

Jack’s series is really a must-read for engineers in the embedded space, and I’ll be posting summaries of the other reasons over the next couple of months.

Jack Ganssle’s “How embedded projects run into trouble”: Reasons 10 and 9

A couple of weeks ago, embedded.com started running a series by Jack Ganssle that gives Ganssle’s Top Ten list of “How embedded projects run into trouble.” As an engineer who’s worked on a lot of embedded projects over the years, and whose company specializes in them, this series was obviously going to be one that caught my eye. (Not that any project I’ve ever worked on has run into any trouble…) So far, Jack has published articles on his Numbers 10 and 9. Here’s a summary of his thoughts on some common reasons projects go awry:

10 – Not enough resources allocated to a project

I suspect this is a situation we’ve all gotten involved in at some point during our careers. Jack cites a number of culprits here, including failure to invest in tools that help boost productivity. Not a problem at Critical Link: we’re big believers in providing our developers with the tools they need. He then talks about understaffing, pointing out that it “always results in higher costs.” This problem is compounded by the “mundane activities” that engineers can get caught up in “that were once handled by lower-wage workers.”

I like his point about the importance of management support:

A crucial role of a boss is to shield the team from anything that distracts them from getting the work done.

And Jack’s final point really resonates: whether to develop firmware in-house, outsource it, or buy something off the shelf. Sometimes it may seem cheaper to do something with the resources on hand rather than paying out of pocket. He uses an example in which a colleague told him that:

… his company couldn’t afford the $15k needed to buy a commercial RTOS, so his team would write their own. I checked: the commercial version was 6000 lines of code. Multiply that by $20-$40/line [industry estimate for developing firmware code] and that $15k looks like a blue light special.
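
Running Jack’s numbers makes the point: 6,000 lines at $20–$40 per line works out to roughly $120,000–$240,000 of in-house development effort, or somewhere between eight and sixteen times the $15k price of the commercial RTOS.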

This seems to be as much about investing in tools as it is about hiring consultants, but point taken:

Penny pinching always winds up costing more.

Amen to that!

9 – Jumping into coding too quickly

The Ganssle series isn’t complete yet, but I’m thinking that this point would be closer to my Number 1 than the Number 9 that Jack gives it. At Critical Link, our experience as systems engineers means we place tremendous importance on planning and being careful about our requirements. Here’s a number that he uses:

PMI’s Pulse of the Profession states that 37% of software project failures are due to inadequate requirements.

He notes that there’s a universal

…propensity to start coding as quickly as possible, even well before the functionality and design have stabilized. Yes, in some cases it’s impossible to know much about the nature of a product until a prototype exists, but generally we can get a reasonably-clear idea of what the requirements will be before writing C.

Jack also points out that:

Embedded firmware is a very different sort of beast from other software. We must specify the hardware early in the project, and that can’t be done unless the team has a pretty good idea of what they will build. Get it wrong and costs balloon. Schedules fall apart.

It IS hard to elicit requirements. Sometimes maddeningly difficult. But that doesn’t justify abdicating our responsibility to do as good of a job as possible to figure out what we’re building.

Jack then lists the major reasons “why requirements get shortchanged.” Worth a look at the article.

Embedded development is my territory, so I’m looking forward to the continuation of this series – and to see what Jack’s Number 1 is. Over the next few months, I’ll be posting on the other articles.  Stay tuned!

Here we are, still talking about Moore’s Law

Moore’s Law seems to be a perennial topic in the tech press. (We’ve done our share of covering it as well. Here’s a 2017 post on the subject, and here’s an earlier take from 2014.)

For anyone who needs a reminder, Moore’s Law – which is now more than 50 years old – can pretty much be summed up as the observation that the number of components in an integrated circuit doubles roughly every two years, without a corresponding increase in cost. Moore’s Law held for quite a while, but many observers of late have noted that we will eventually reach a point where doubling the number of components on a chip becomes physically impossible. Which is not to say that ICs won’t continue to do more, even if the fancier things they’re doing may be pricey.

But people continue to debate Moore’s Law. One of them is Intel CTO Mike Mayberry, who had an interesting piece on the subject a couple of weeks back on EE Times.

The thrust of Mayberry’s article is not to debate whether Moore is dead. It’s to offer a look at where things go from here.

For Mayberry, the future of Moore starts with CMOS scaling.

CMOS scaling is not yet done, and we can see continued progress as we improve our ability to control fabrication. We are not so much limited by the physics as by our ability to fabricate in high volumes with high precision. It is difficult, but we expect it to continue.

Mayberry’s article includes an illustration of what he sees as the next generation of Moore’s Law.

Mayberry spends a bit of time describing what Intel’s up to with 3D and with technology that “can drastically improve power-performance.”

Unfortunately, [these new devices] are not a simple replacement for CMOS. So we expect to integrate these in a heterogeneous manner, likely as layers, combining the goodness of scaled CMOS along with novel functions provided by these new devices.

He also mentions that those “novel architectures” are designed to handle the data explosion that’s been going on (and is only, in my view, going to continue to accelerate).

Mayberry’s bottom line on Moore’s Law:

Pulling all this together, we expect the economic benefits of Moore’s Law to continue even as the elements look different than when Moore made his original observation. We need not be distracted by the debate but should continue to deliver ever-better products over the next 50 years.

Whether what Mayberry envisions is the future of Moore’s Law or not, I think that one thing we can count on is ICs becoming more and more powerful.

The Spyce of Life, Robotically

My fascination with robotics goes back a long way – to when I was in grad school. Robots back in the day couldn’t do all that much, just a few mechanical things done in a way that made them look, well, robotic. As it has in virtually every other arena, the technology has advanced dramatically over the years. Anyway, knowing of my interest in things robotic, a friend in Boston recently wrote to me about a restaurant there that’s using robots.

Spyce has humans who take care of the finishing touches, but robots are making the food. Here’s what they have to say for themselves:

At Spyce, we’ve created the world’s first restaurant featuring a robotic kitchen that cooks complex meals. We created this concept in hopes of solving a problem we found ourselves facing, being priced out of wholesome and delicious food.

The problem solvers were MIT students when they started working on Spyce, and somewhere along the line they enlisted a Michelin-rated NYC chef to help them come up with the menu, which is a range of rice bowls along various themes – Indian, Thai, Latin, etc. And because they’re MIT guys, they invoke the name of Nikola Tesla, whose work underlies the induction technology they use to cook the food.

Here’s a description of the concept and how it works:

“While we expected many people to come to the restaurant at first because of the novelty of the robot, the real benefit of our robotic kitchen comes from the quality of meals we are able to serve,” co-founder and lead electrical engineer Brady Knight told Digital Trends. “Being that our robot does the portioning and cooking, we can ensure the meals are being made consistently and accurately. Another advantage is that our technology allows employees to focus on creating more meaningful connections with our guests.”

When customers enter Spyce, they are met with a human guide who shows them to a touchscreen kiosk where they can place their order. This order is then sent to the kitchen — which is visible to the customers — where the food is prepared by robots. Finally, it’s handed over to a human employee to add garnishes like cilantro or crumbled goat cheese, before being distributed to the customer. (Source: Digital Trends)

Wish there were more info about the technology they’re using, but I couldn’t dig much up.

My Boston friend has now been to Spyce twice, and she vouches for the food, giving it high ratings for tastiness and portion size. She did note that, unlike really fast food, it does take a while because it’s all prepared to order. Next time I’m in Boston, I’ll be heading there to check it out. As I said, robots have come a long way since I was in grad school. Fortunately, they haven’t yet replaced us humans – even at a robot-driven restaurant like Spyce.

A Full Metal Doctor’s Jacket

One of the great pleasures of being part of the wonderful world of technology is seeing the many brilliant uses that people make of it. When I look at the types of applications our clients are working on, I get to see how technology is harnessed to make manufacturing more productive, transportation more reliable, defense more accurate, and medicine better able to save lives. It’s very gratifying to know that Critical Link is playing a critical role in so many applications.

Whether it has anything to do with Critical Link or not, however, I’m always on the lookout for stories on how innovative technology can be beneficial. One application I recently came upon was written up in The Economist.

A scientist in the UK has perfected a way to make the clothing doctors and nurses wear “germ-proof.”

Gold and silver have properties that are antibacterial and are used as coatings for some medical implants. But metallic coatings haven’t worked so well on fabrics: they’re quickly washed away.

Dr. Liu Xuqing has devised a process that enables antibacterial metallic coatings to cling to fabrics. And his metal of choice is copper, which is cheaper than gold and silver and thus more practical for scrubs and other medical clothing that’s frequently laundered.

Working with colleagues from two Chinese institutions, Northwest Minzu University in Lanzhou and Southwest University in Chongqing, Dr Liu has been treating samples of fabric with a chemical process that grafts what is called a “polymer brush” onto their surfaces. As the name suggests, when viewed at a resolution of a few nanometres (billionths of a metre) through an electron microscope, the polymer strands look like tiny protruding bristles. That done they use a second chemical procedure to coat the bristles with a catalyst.

After this, they immerse the fabric in a copper-containing solution from which the catalyst causes the metal to precipitate and form tiny particles that anchor themselves to the polymer brush. Indeed, they bond so tightly that Dr Liu compares the resulting coating to reinforced concrete. Yet the process takes place at such a minute scale on the surface of the fabric that it should not affect the feel or quality of the finished material. (Source: The Economist)

The fabric kills E. coli and staph, and the coating survives more than 30 washes.

Dr. Liu is also exploring other uses, including making:

…conductive threads that could form part of electrical circuits woven into clothing. Such circuits might, for instance, link sensors that monitor the body. They might even carry current and signals to other fibres, treated to change colour in response, to produce fabrics that vary in hue and pattern—maybe to reflect, as detected by sensors, the wearer’s mood.

I don’t know that I necessarily want to be able to read someone’s mood from the shirt they’re wearing. I think I’ll stick with watching someone’s face and body language, and listening to their tone of voice!

But full metal doctors’ jackets that prevent germs from getting passed around – this is one innovation that once again demonstrates why technology is such an exciting field to work in.

Celebrating the Fourth

It’s the Fourth of July, and that means fireworks.

Five years back, in my Glorious Fourth post, I wrote about “how fireworks displays have become so computer-driven over the years.” Here’s a bit about the technology involved:

Before a show can be choreographed, the properties for each shell must be logged, including burn time, lift time and effects. Microchips embedded in the shells trigger the fireworks to explode at a specific height, in a particular direction and with millisecond precision.

Curious about what technological changes have occurred since 2013, I did a bit of googling around.

Two years ago, Jason Daley, writing for Smithsonian Magazine, pointed out that the fundamentals haven’t changed much over the years – and he was talking about the span between 600 A.D. and now, let alone since 2013.

Fireworks in the 21st century are still essentially the same as they ever were— a shell full of gunpowder that launches a payload of black powder and chemically treated “stars” into the sky. (Source: Smithsonian Magazine )

Having said that, Jason does go on to list some of the changes he sees on the fireworks horizon.

Some may ask what the point is of fireworks that don’t make loud noises, but some locales are no longer allowing booming displays. That has led to “quiet fireworks,” which just means designing shows with shells that don’t create as much noise pollution.

He also mentions daytime fireworks.

That means making colors brighter and even adding other display options like Flogos, corporate logos or designs made out of foam bubbles.

Maybe it’s just me, but I don’t quite see the point of daytime fireworks. There’s something about lighting up the night sky…

One thing that fireworks companies haven’t been able to come up with is a deep and consistent blue. That’s because the copper compound required for blue tends to break down when the temperatures rise. But with better temperature control, the prospect of getting a “more stable blue” is on the horizon.

3-D choreography is another advancement that’s emerging, thanks to computer simulations and 3-D modeling.  Sounds interesting – and something to look forward to.

Last July, Conor Cawley on Tech.Co had a more technology-based piece entitled “5 Ways Technology Is Making Fireworks Safer and Awesomer.”

Pixelburst Shells can generate fireworks in shapes that go beyond the chrysanthemums and waterfalls we’re all used to, “thanks to a microchip inside the fireworks, they can program them to do just about anything that only requires a few pixels.”

Bluetooth lets technicians set off a display without having to be anywhere near a fuse or gunpowder, which eliminates a lot of the lost-finger risk associated with fireworks. Like Jason in the Smithsonian article, Conor gives a shout-out to simulation software that lets technicians plan and test out complex, elaborate displays. Another safety improvement is that the quality of the mortars themselves has gotten better, with high-density fiber replacing the dangerous cardboard tubes.

The final bit of technology that Conor sees as essential to a “safer and awesomer” Fourth of July is one that’s been around since well before the Chinese came up with gunpowder some 1,500 years back. And that’s the human brain. As he puts it:

Don’t light M80’s off in your hand. Don’t point Roman Candles at your friends. And most importantly, don’t do anything stupid.

Not much to add to that, in terms of safety warnings.

So I’ll end with wishing everyone a wonderfully glorious Fourth of July.

The importance of supply chain management

We don’t often do crime-related posts, but this story is an important one, and underscores how vital supply chain management is to us and our customers.

As folks in our industry are well aware, lead times for some components – especially memory and processors, including FPGAs – have become so bad that many board suppliers are having a difficult time getting products built in a reasonable time frame. This can lead to suppliers buying products from unauthorized sources before they are properly vetted.

This situation has been in the news after the owner of PRB Logics Corporation, Rogelio Vasquez:

…was recently arrested and charged with bootlegging counterfeit chips, some of which could have been used in military applications, federal customs officials said. (Source: James Morra in Source Today)

PRB Logics is a small distributor of obsolete components and excess inventory. As such it plays a role in the overall electronics supply chain.

…counterfeits often sneak into the supply chain through independent distributors that fail to screen them for authenticity. Several years ago, industry analysts estimated that counterfeit chips represent $169 billion in potential risk per year for the global electronics supply chain. The counterfeit threat is serious enough that the United States introduced a bill last year to stem the flow of electronic waste to other countries where the chips inside could be extracted, repackaged and sold as genuine.

While PRB Logics may not be a big deal, the charges against Vasquez are. According to a May 2018 release from U.S. Immigration and Customs Enforcement (ICE):

Rogelio Vasquez is charged in a 30-count indictment that alleges he acquired old, used and/or discarded integrated circuits from Chinese suppliers that had been repainted and remarked with counterfeit logos. The devices were further remarked with altered date codes, lot codes or countries of origin to deceive customers and end users into thinking the integrated circuits were new, according to the indictment. Vasquez then sold the counterfeit electronics as new parts made by manufacturers such as Xilinx, Analog Devices and Intel. (Source: ICE)

Using counterfeit components carries serious risks. As the Source Today article stated:

… the chips [Vasquez] counterfeited could be used in things like missile defense systems and satellites. Using salvaged, substandard or incorrect components can increase the risk that these systems will fail, and that could lead to classified information being leaked or military operations being interrupted, ICE said.

Our customers can rest assured that Critical Link takes an approach to component sourcing that never compromises on supplier selection. We are ISO 9001:2015 certified, a designation we achieved in May and a step up from our previous ISO 9001:2008 certification. As such, we carefully vet every supplier to ensure they meet our standards for quality, including our counterfeit parts policy. And because we pipeline material well ahead of demand, our customers benefit from being able to order and receive parts from us when others in the industry have challenges delivering.