
Update on the Altera-Based Mity

Since a lot of you have been asking, and there is great interest in this new Critical Link product, I wanted to give a brief progress report on where things stand.

First off, to lay the groundwork: we’re coming up on shipping our first Altera-based SoM – the MitySOM-5CSX, which is based on the Altera Cyclone V SoC. We believe that this new SoM will give you an excellent choice for high-throughput applications, and for applications requiring tight integration of an ARM Cortex-A9 and FPGA fabric. (You can read more about it in this earlier post.)

Anyway, we are very pleased to announce that the SoM is up and running, the baseboard is in and up, and we’ve been through basic hardware bring-up and checkout on both boards (to varying degrees, depending on the board). The system boots through U-Boot to a Linux prompt! While we’re still hammering away on the DRAM to ensure that signal integrity is where it needs to be, the memory is, so far, looking good.

As always, we’re doing a lot of testing along the way, and are continuing to bring up (and test) the interfaces. So far, we’re good to go with a number of them, including gigabit Ethernet, which is showing excellent performance.

We’re still on track for a September release, and availability through Arrow Electronics before the third quarter is out. So stay tuned. We’re very excited about this product and hope you are too!

Expanding LinuxLink support

We’re happy to let you know that we now have LinuxLink support for the MityDSP-L138(F), our System-on-Module (SoM) based on TI’s dual-core OMAP-L138 DSP + ARM9 processor. LinuxLink is a Linux toolset from our partner Timesys that enables quick and easy design, development, deployment, and debugging of an embedded product based on open source Linux.

The MityDSP-L138(F), which has an optional Xilinx FPGA – that’s what the (F) stands for – is ideal for cost-sensitive applications that need fixed-function co-processing and low power consumption. The markets where we’re seeing demand for it include industrial automation, communications and telecom, and medical.

Interested MityDSP-L138(F) developers can head over to Timesys to download a free board support package and matching software development kit (BSP/SDK). They can get support for build and boot issues just by registering with Timesys. (There’s also a LinuxLink Pro Edition for those who need maximum customization.)

In the works: LinuxLink support for the MitySOM-1808, a SoM based on TI’s Sitara AM1808 processor. LinuxLink is already available on the MitySOM-3359, which is also based on a TI Sitara processor.

With embedded Linux support, our customers get a jumpstart on top of the one that working with a SoM already provides, further accelerating product development. And in today’s ultra-competitive environment, we all know how important it is to get applications to market fast.

To learn more about LinuxLink for the MityDSP-L138, click here.

Partner Focus: LS Research

When we build our SoMs, we rely on our partners for the best available components. For WiFi, that partner is LS Research.

LS Research focuses on advanced embedded wireless solutions for the industrial, medical, consumer, and defense markets – pretty much the same markets that Critical Link serves. We embed their TiWi WiFi (802.11 b/g/n) and Bluetooth module – which is FCC/IC/CE/C-Tick certified – in the MitySOM-335x Display and WiFi Development Kit. As more and more applications start using wireless communications, you can expect to see more wireless support for our SoMs. (For starters, we will be announcing shortly that WiFi will be available onboard the MitySOM-335x SoMs, which means less hardware work for our customers developing wireless apps.)

Anyway, when it comes to WiFi, LS Research are the experts. (And, like Critical Link, they’re a Platinum Partner of the TI Design Network, which is a highly selective group of companies that work with Texas Instruments’ processors.)

We want to give them a shout-out today to let you know that LS Research has a webinar coming up that might be of interest.

“Are You Bluetooth Smart” is being held on Thursday, August 15, at 11 a.m. Central, and will focus on Bluetooth Low Energy – which is not just a low energy version of Bluetooth Classic, but a different protocol entirely.

The webinar’s free, and you can register here.  If they ask, you can tell them that Tom Catalino sent you.

DRAM and DRAM Controllers

Late last year, Ron Wilson, Editor-in-Chief for our partner Altera, posted a great article – DRAM Controllers for System Designers – that provides a deep dive into DRAM controllers and how they work. As with every other element of electronic system design, you need to get this one right. As Ron says,

The proper operation of the DRAM controller can make the difference between a system that meets its design requirements and a system that runs too slow, overheats, or fails. Either way, ultimately the system design team—who often have little access to information about the controller—will bear the responsibility.

Ron’s article delves into the full complexities of having an advanced DRAM controller playing intermediary between the innards of a system-on-chip (SoC) and external DRAM, and concludes with some suggestions for systems design.

I’m not going to attempt a summary of the article here – it is most definitely worth a full read for anyone who wants a deeper understanding of how DRAM works and of the complexity inside a DRAM and its controller. Most of us who design electronics products don’t typically have to get in at this level, but it’s always good to know what’s going on down there – and to know that, if you do need control over how the DRAM controller behaves, you have that option when you work with an FPGA or design your own ASICs.

TI’s New NFC Interface Transponder

Last month, TI announced a new near-field communications (NFC) interface transponder.

The new Dynamic NFC Transponder Interface RF430CL330H is low cost, bringing a secure, simplified pairing process for Bluetooth® and Wi-Fi connections to products, such as printers, speakers, headsets, and remote controls, as well as wireless keyboards, mice, switches and sensors. It is the only dynamic NFC tag device designed specifically for NFC connection handover and service interface functions, including host diagnostics and software upgrades.

At the same time, they also announced the NFCLink firmware library, so that developers can quickly create NFC apps.

As the Internet of Things continues to expand rapidly, and as more and more products – both consumer (speakers) and non-consumer (medical devices) – are Internet-enabled, chips like the Dynamic NFC Transponder Interface will become increasingly important. They’ll help application developers build products that really make WiFi setup easy and foolproof. While consumer applications have been getting easier and easier to use – a couple of months ago I blogged about the NEST thermostat, which was pretty darned easy to set up – they can still be challenging for non-technical types. Even for technical types, setup can be cumbersome and painful, as when you have to enter WiFi security credentials that run to dozens of characters, often through an interface that’s not easy to use.

When your phone and WiFi router are NFC-enabled, you’ll just be able to tap the phone to the router and the credential information will be transferred immediately. Your smartphone will become something of a magic wand. No more setup hassles: you’ll be able to just tap and go to introduce your phone to your stereo, your printer, or your HVAC controls, and they’ll make the connection for you.

Much as I like doing it myself – come on, I’m an engineer – the simplification that near field communications brings to setup is something I welcome with open arms.

Ten Software Tips for Hardware Developers

Earlier this year, I saw an excellent article by Jacob Beningo on EDN that provided a list of software tips for hardware engineers. It started by making the point – which those of us at Critical Link, and our customers, certainly understand – that those who design embedded systems need to understand not “just” the hardware end of things, but also how the software will work with the hardware.

As engineers, we tend to sort ourselves out – we’re hardware folks, or software folks – but in the real world, as the article points out, we are typically called upon to wear multiple hats.

That doesn’t mean that we have to become masters of every technical domain. That’s just not possible, especially in this day and age. There’s just too much to “know”, and it all changes so fast.

Still, most of us can’t go through our professional lives mono-focused on just one aspect of technology, and Jacob’s list provides some good guidelines for us hardware folks.

Here’s what’s on his list, along with a thumbnail description of his point.

  1. Flowchart First, Implement Second. In other words, don’t just start coding. Think things through first. The point Jacob makes is that you naturally do this as a “hardware guy.” You should also do it when you put the software hat on.
  2. Use State Machines to Control Program Flow. Again, this is an aid to thinking things through – in this case, thinking through all the states and transitions that your code needs to cover. Interestingly, this is a variation on a theme that my colleague Tom Catalino wrote about a few weeks back, in his post on state machine diagramming.
  3. Avoid Using Global Variables.  This is a change in thinking from the old days when the use of globals was widespread (and recommended). With object-oriented programming, however, best practice calls for defining variables “in the smallest possible scope and encapsulated to prevent other functions from misusing or corrupting the variables.” (Think globally, code locally.)
  4. Take Advantage of Modularity. This approach lets you build up function libraries that can be reused, and is also helpful when it comes to maintaining and upgrading your applications. (It may take a little longer at first, but it’s good for keeping costs down in the long run!)
  5. Keep Interrupt Service Routines Simple. As Jacob points out, even with today’s fast processors, the overhead involved with interrupts should be kept to a minimum so that you’re not taking anything away from what the primary code is trying to accomplish. Set a flag and get out as quickly as you can, and let a regular thread do the actual work. (There’s a short sketch of this pattern after the list.)
  6. Use Processor Example Code to Experiment with Peripherals. Take advantage of the example code that the chip manufacturers provide. This isn’t code you can just drop into your app, but using it may help you avoid some dead ends.
  7. Limit Function Complexity.  The more complex the tasks are, the harder it’s going to be to maintain the code. Jacob suggests using cyclomatic complexity to help you measure this.  (You’ll probably have to Google that one!)
  8. Use a Source Code Repository and Commit Frequently. This may seem like one more pain-in-the-neck step to worry about, but version control is critical in software development. The ability to go back and check out an earlier version of the code that works is priceless. This can be a lifesaver!
  9. Document the Code Thoroughly. This is another one of those pain-in-the-neck steps, but unless you have a perfect memory for what was going through your mind while you were coding – and you’re going to be the only person who ever has to upgrade and maintain it – documentation is essential. When I was young, I could look back at undocumented code I’d written and follow it as if I’d just written it. Nowadays, I know those comments are for ME!!
  10. Use an Agile Development Process. In software development, that means focusing on the highest priority tasks first, and having an iterative development process. This lets you adapt more quickly when requirements change.  (Not that THAT ever happens…).
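
To make tip 5 concrete, here’s a minimal sketch of the flag-and-main-loop pattern in C. This isn’t code from Jacob’s article – the register and function names are placeholders – but it shows the shape of the idea: the ISR does the bare minimum, and the real work happens back at the thread level.

    #include <stdbool.h>
    #include <stdint.h>

    /* Placeholder for a memory-mapped ADC result register. On real hardware
       this would come from the part-specific header. */
    static volatile uint16_t ADC_RESULT_REG;

    /* Data shared between the ISR and the main loop. "volatile" tells the
       compiler these can change outside the normal flow of the program. */
    static volatile bool     sample_ready  = false;
    static volatile uint16_t latest_sample = 0;

    /* Stub for the real work the application needs to do with a sample. */
    static void process_sample(uint16_t sample)
    {
        (void)sample;   /* filtering, scaling, logging, etc. would go here */
    }

    /* Hypothetical end-of-conversion interrupt handler; the actual vector name
       and register access depend on your processor. It just captures the data
       and sets a flag -- no heavy lifting in interrupt context. */
    void adc_isr(void)
    {
        latest_sample = ADC_RESULT_REG;
        sample_ready  = true;
    }

    int main(void)
    {
        for (;;) {
            if (sample_ready) {                   /* flag set by the ISR */
                sample_ready = false;
                process_sample(latest_sample);    /* the real work runs here, not in the ISR */
            }
            /* ...other foreground tasks... */
        }
    }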

There’s a final point on Jacob’s list, which recommends that you keep on top of the latest tools and techniques. Good advice whether you’re developing hardware, software, or both.

The full article is definitely worth reading. Here’s the link.

Open Source is good for you. So’s proprietary code.

A couple of months ago, at a tech conference, I got into a conversation on whether there are quality differences between open source and proprietary software. This has long been grounds for debate, with one side claiming that open source is better while the other side maintains that, with open source, you get what you pay for.

My “debate partner” – that’s in quotes because we really weren’t taking sides, just having an open (source) conversation – referred me to a study that Coverity does each year:

For those not familiar with the Coverity Scan™ service, it began as the largest public-private sector research project in the world focused on open source software quality and security. Initiated in 2006 with the U.S. Department of Homeland Security, Coverity now manages the project, providing our development testing technology as a free service to the open source community to help them build quality and security into their software development process. The Scan service enables open source developers to scan–or test–their code as it is written, flag critical quality and security defects that are difficult (if not impossible) to identify with other methods and manual reviews, and provide developers with actionable information to help them to quickly and efficiently fix the identified defects. (Source: Coverity Scan: 2012 Open Source Report, which can be downloaded here once you register.)

The study now goes beyond open source, and scans projects developed as either open source or proprietary code. Here are their results:

[Chart: Coverity Scan defect density results, open source vs. proprietary code]

As you can see, or not see – that snapshot’s pretty blurry (sorry about that) – there’s not much difference in quality between open source and proprietary code: a defect density of .69 for open source, and .68 for proprietary. (Defect density is the number of defects per 1,000 lines of code – so, for example, 690 defects in a million-line code base works out to a density of .69. According to Coverity, a defect density of 1.0 is “considered good quality software”, and the code they analyzed is well below that on both sides.)

I actually wasn’t surprised by this. At Critical Link, we’re part of the open source community, and I know that when we submit code, it goes through a thorough vetting process. (Which is not to say that there isn’t some sloppy code floating around out there.)

So if free open source is such a good deal – it’s free, after all, and a lot of it is of high enough quality to be used in commercial development – why would anyone actually buy open source?

One of the most important reasons to purchase open source software is support. While there is community-based support for open source code, it’s not necessarily going to be there when you need it. It’s also the case that you may not be able to get updates that would be useful to you, since the coders may have wandered off to new projects that have caught their attention. These things matter when your work is time critical. They also matter if you’re less experienced with the open source code base you’ve adopted, and can’t easily figure things out, make bug fixes, and create enhancements yourself.

Another advantage of going with a proprietary software solution is that the vendor you work with may also provide you with development help, from design through coding. This is the case with our partner Timesys, which offers LinuxLink, an embedded software development framework, along with professional services to back it up.

So, yes, open source code can be good, but there are plenty of situations in which you’re better off going the proprietary route.

 

The Glorious Fourth

As we gear up for the Fourth of July, I quite naturally got to thinking about fireworks. And how fireworks displays have become so computer-driven over the years.  No more lighting a match to set off the Spider or the Chrysanthemum on the town green while the band plays a Sousa march from the gazebo. These days, especially for the major fireworks shows – the extravaganzas that get televised – everything is choreographed (if that’s the right word) by computer.

I came across an interesting piece on the computerization of fireworks on Intel Free Press.

There I learned that the first use of computers with fireworks occurred in the late 1970s, when a computer “struck” an electric match to set off the fuse that started the show – a safer alternative to flicking a Bic and running for cover. Since then, computer use has become far more sophisticated.

Before a show can be choreographed, the properties for each shell must be logged, including burn time, lift time and effects. Microchips embedded in the shells trigger the fireworks to explode at a specific height, in a particular direction and with millisecond precision.

One of the enhancements on the way is timing chips that will enable more use of letters in displays:

Though letters and numbers have appeared in larger shows over the past few years, often for countdowns and spelling out the patriotic abbreviation in Lee Greenwood’s anthem, “God Bless the U.S.A.,” [Jim] Souza [CEO and president of Pyro Spectaculars] said the industry hasn’t yet perfected the pyrotechnic equivalent of skywriting.

“We call it ballistic fireworks technology, and it takes a number of shells to break in sequences to get it correct,” Souza said.

While computers have certainly made fireworks displays more complex and entertaining over the years, sometimes fireworks go ballistic on their own.

Last year, the Fourth of July fireworks in San Diego went off with a bang – a big bang – when, just before the show was scheduled to begin, the fireworks barges exploded. The problem was blamed on a computer programming error. (Source: KPBS TV.)

I wouldn’t want to have been that programmer…

Anyway, I don’t (yet) need a computer to flip some burgers on the grill, which is how we’ll be celebrating the Glorious Fourth at my house.

Wishing everyone a happy (and safe) Fourth of July.

The Internet of BIG, Industrial Things

While the idea of the “Internet of Things” (IoT) has been around for a while, as everything (except, perhaps, human beings) keeps getting smarter, the buzz around IoT is on the rise. I recently blogged about the NEST Thermostat, and a while back I asked whether we’re smarter than the average basketball. (You don’t have to answer that.) I’ve had a Fitbit for some time now, and recently added their Aria WiFi-enabled scale to the bathroom.

But while a lot of the ooh-ing and aah-ing has been around being able to turn on the porch light and let the dog out using your smartphone, there’s been a parallel universe growing, which we can think of as “The Internet of BIG Things,” or “The Internet of Industrial Things,” or “The Internet of BIG, Industrial Things.” And this Internet of Things is growing up around the development of medical, scientific, and industrial applications that increasingly incorporate WiFi, rather than relying solely on more common wired communications interfaces like Ethernet, serial, USB, etc.

We’re certainly starting to see demand for WiFi support from our customers, so we’re adding WiFi to some of our SoMs, starting with the MitySOM-335x.

Interestingly, adding WiFi really brought home to our clients the value of integrating SoMs into their apps, rather than building their own via ground-up efforts. When it comes to WiFi, we’re not rolling our own: we’re using a module from LS Research, since they have expertise in WiFi that we don’t. WiFi is tricky – as much an art as a science. Plus there’s the added complexity of FCC certification, which LSR has already taken care of.

Anyway, while those of us at Critical Link have been focusing on the Internet of BIG Things, I came across an interesting article on Wired’s Gadget Lab. In Welcome to the Programmable World, Bill touches on the gamut from the kitchen coffee pot to the shop floor, and describes just how the “programmable world” will become fully realized. It’s worth a read (as are the comments, many of which take Bill on).

The Internet of BIG, Industrial Things. You heard it here…

State Machine Diagramming. (This is why we do it.)

As a computer science engineering student, one of the early things I learned was the use of state machine diagrams. Following the state machine methodology when doing design work – whether for hardware, a communications protocol, or the software implementation of a communications protocol – has proven invaluable throughout my career. Because I’m a big fan of this approach, I was very interested in last week’s article by Dan Harres over on EDN, “State Machines Ease Programming Microcontrollers.”

For those who aren’t familiar with the state machine process, it’s actually the creation of a diagram (with a math overlay) that sets out all the states and transitions that need to be included in your design.

The example that Dan uses is a simple one – but complex enough to be interesting and to be able to demonstrate the merits of using the state machine methodology.

For his article, Dan’s designing a robot that doesn’t have to do all that much except turn to avoid obstacles. Its electronics are sensors and amplifiers, a microcontroller, and steering and propulsion motor drivers.  With these elements in place (at least on paper), it’s time to list out all the possible states the robot might find itself in: moving straight ahead, turning left, or turning right. (No stopping, no slowing down, no turning around. As I said, this is a simple example.)
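
To make the idea a little more concrete, here’s a minimal sketch of what that three-state machine might look like in C. This isn’t Dan’s code – the sensor inputs and names are simplified stand-ins – but each case in the switch corresponds to a state bubble in the diagram, and each return to a transition arrow.

    #include <stdbool.h>
    #include <stdio.h>

    /* The three states from the example: straight ahead, turning left, turning right. */
    typedef enum { STATE_STRAIGHT, STATE_TURN_LEFT, STATE_TURN_RIGHT } robot_state_t;

    /* Hypothetical sensor inputs: true when an obstacle is seen on that side. */
    typedef struct {
        bool obstacle_left;
        bool obstacle_right;
    } sensors_t;

    /* One pass through the state machine: given the current state and the latest
       sensor readings, return the next state. */
    static robot_state_t next_state(robot_state_t state, sensors_t in)
    {
        switch (state) {
        case STATE_STRAIGHT:
            if (in.obstacle_left)  return STATE_TURN_RIGHT;   /* steer away from it */
            if (in.obstacle_right) return STATE_TURN_LEFT;
            return STATE_STRAIGHT;
        case STATE_TURN_LEFT:
            /* keep turning until the obstacle on the right is cleared */
            return in.obstacle_right ? STATE_TURN_LEFT : STATE_STRAIGHT;
        case STATE_TURN_RIGHT:
            return in.obstacle_left ? STATE_TURN_RIGHT : STATE_STRAIGHT;
        }
        return STATE_STRAIGHT;   /* defensive default */
    }

    int main(void)
    {
        static const char *names[] = { "straight", "turn left", "turn right" };
        sensors_t readings[] = {
            { false, false }, { true, false }, { true, false }, { false, false }
        };
        robot_state_t state = STATE_STRAIGHT;

        for (int i = 0; i < 4; i++) {
            state = next_state(state, readings[i]);
            printf("step %d: %s\n", i, names[state]);
        }
        return 0;
    }

The nice thing about this structure is that when your analysis turns up a missing state or transition, the fix maps directly onto a new enum value or a new branch in the switch.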

Anyway, without giving away – or rewriting – Dan’s article: he walks through the process of diagramming the possibilities, coming up with an initial state machine diagram, then thinking it through and realizing that, on the first round, he hadn’t covered all the possibilities.

When thinking through my state machines, I’ve often found that they get revised in many ways before they’re complete. A good, careful analysis of all the states that are needed in your design is very important. Through the analysis process, you sometimes find that you need additional states, sometimes intermediate states to handle certain situations.

What Dan’s article shows – and what has always been my experience – is that the state machine methodology provides an excellent visual depiction of how the system you’re designing will operate, allowing you to thoroughly analyze your problem. It’s a true aid for making sure that you have a system reaction to each input, from every state. With your state machine diagram in hand, you’ve covered a lot of ground towards creating a robust design and a roadmap for implementing it.

Dan’s article goes into a lot of detail, and is as good as any I’ve seen for describing how this methodology works. I note, however, that even though the article is targeted at microcontroller programming, the methodology is applied regularly in more complex, microprocessor-based embedded system designs. In fact, I’ve only used it on more complex systems, since that’s where the bulk of my work has been.