Over the past several months, we've seen situations arise on two of our platforms that highlight how valuable it is to have the ability to "tweak" your on-board power system. I've written two blogs in the past, both referencing the programmable architecture we've used on several platforms over the years (linked below).
I won't rehash everything I like about using these types of devices, but suffice it to say I'm a big fan of telemetry in our power systems. A couple of recent situations have highlighted another great benefit of programmable configuration: design modification. Let's start with the UltraZed-EV platform.
Xilinx is a close partner of Avnet's, and as such we often see devices and designs long before they become public. To make sure products are available when devices are announced to market, we often begin designs before the silicon has been fully revised for production. Such was the case with the UltraZed-EV system-on-module. The Zynq UltraScale+ EV devices include a dedicated video codec unit (VCU), and this core has its own separate power supply rail. Early in development this voltage was specified at 0.85V. During device testing and characterization, Xilinx determined that 0.9V was the optimal voltage for top performance of the core. Unfortunately for Avnet, we had already designed the board with this rail set to 0.85V.

Now, a quick disclaimer: operating the VCU at 0.85V does not cause any damage to the device. This was verified with Xilinx. The difference in performance may only be perceptible at the speed and data-rate limits of the device. In other words, we wanted to update boards moving forward and give customers a path to "fix" the voltage on their own systems, but because there is no risk of damage if the output isn't changed, the update isn't required for boards in the field.

Here is where having the programmable architecture was great. Rather than replacing an IC or changing set resistors, compensation components, and so on, we were able to simply adjust the voltage using the Infineon PowIRCenter GUI, save the changes to the device, and we were done. OK, there was still plenty of documentation, testing, and updated production programming files, but the "fix" itself was done. The ability to make that change digitally saved us from the costly decision of whether or not to spin the board to change that voltage.
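The PowIRCenter GUI handles the details, but under the hood a change like this amounts to rewriting the regulator's output-voltage register over PMBus. As a rough sketch of what that involves, here is the standard PMBus ULINEAR16 encoding for VOUT_COMMAND. The exponent of -12 is a common choice but is an assumption here; a real device reports its exponent in VOUT_MODE, and none of these values are taken from the actual UltraZed-EV configuration:

```python
# Sketch: encoding a PMBus VOUT_COMMAND value in ULINEAR16 format.
# Assumes the rail uses a VOUT_MODE exponent of -12 (register holds
# volts * 2^12); real hardware may report a different exponent.

VOUT_MODE_EXPONENT = -12  # illustrative; read VOUT_MODE (0x20) on real parts

def encode_vout(volts: float, exponent: int = VOUT_MODE_EXPONENT) -> int:
    """Convert a target voltage to a 16-bit VOUT_COMMAND register value."""
    return round(volts * 2 ** (-exponent))

def decode_vout(code: int, exponent: int = VOUT_MODE_EXPONENT) -> float:
    """Convert a 16-bit register value back to volts."""
    return code * 2 ** exponent

old_code = encode_vout(0.85)   # original VCU rail setting
new_code = encode_vout(0.90)   # revised recommendation
print(f"0.85 V -> 0x{old_code:04X}, 0.90 V -> 0x{new_code:04X}")
```

On a live system, the new code would be written to the rail with a PMBus word write to VOUT_COMMAND (0x21) and then committed to the PMIC's non-volatile memory so the setting survives a power cycle, which is effectively what saving the change in the GUI does.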
Another example comes from the Ultra96-V2 design. When we moved from V1 to V2, we changed the power architecture over to the Infineon PMIC-based solution. Recently we've seen an increase in Deep Learning Processor Unit (DPU) usage within the Xilinx device. This IP core is computationally intensive and represents one of the highest power-consumption use cases we've seen. While qualifying platforms for DPU use, we discovered that V2 was resetting during operation at higher clock frequencies. V1 performed better, but was still somewhat limited. Here is a capture, from a design my colleague created around an AI platform, showing the resource utilization.
The course itself (titled Introduction to Deep Learning with Xilinx SoCs under Ultra96 Advanced Courses) is available on demand at - http://avnet.me/ttc_on_demand
The higher-than-expected power consumption was a result of the resource utilization coupled with high toggle rates. What we discovered was that Vccint on the V1 design was provided by a shared supply, whereas V2 had a dedicated output purely for Vccint. By happy coincidence, V1 worked better because it could draw additional current from the shared supply that wasn't available from the dedicated V2 output.

The first step was verifying that power limitations were contributing to the board failures. Sure enough, monitoring the Infineon GUI during operation showed that the current was indeed exceeding the fault threshold set on the output. As the clock rate was increased above 500MHz, I would see the current draw on the Vccint rail spike above 4A. The good news is that the protection was operating as designed, issuing a power-on reset when the current exceeded the 4A fault threshold. The steady-state current remained under 4A, but the periodic peaks were triggering the fault reset.

Based on that discovery, I adjusted the warning and fault thresholds on the output and re-ran the tests. I was able to open the margins up wide enough to support the DPU without failure, while also remaining within the safe operating limits of the device. Now to be fair, this change was only possible because we had designed plenty of margin into the external power stage inductor and capacitors to support increasing the target output current. This change was referenced by product change notice (PCN) 19003 (http://avnet.me/PCN19003). Another PCN is expected in Summer 2020, widening the thresholds on the other outputs as well to provide the highest design margin possible within the physical constraints of the design.
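Current limits such as IOUT_OC_FAULT_LIMIT are typically carried in the PMBus LINEAR11 format: an 11-bit signed mantissa and a 5-bit signed exponent packed into a single word. As a purely illustrative sketch (the exponent choice and the raised 6A limit below are assumptions, not the actual Ultra96-V2 values; the real register map and supported range come from the Infineon datasheet):

```python
# Sketch: packing/unpacking a PMBus LINEAR11 word, the format typically
# used for current limits like IOUT_OC_FAULT_LIMIT. The exponent here
# is illustrative; the device datasheet defines what is supported.

def encode_linear11(value: float, exponent: int = -3) -> int:
    """Pack a value into LINEAR11: 5-bit signed exponent, 11-bit signed mantissa."""
    mantissa = round(value * 2 ** (-exponent))
    if not -1024 <= mantissa <= 1023:
        raise ValueError("mantissa out of range; choose a different exponent")
    return ((exponent & 0x1F) << 11) | (mantissa & 0x7FF)

def decode_linear11(word: int) -> float:
    """Unpack a LINEAR11 word back into a real value."""
    exponent = (word >> 11) & 0x1F
    if exponent > 15:            # sign-extend the 5-bit exponent
        exponent -= 32
    mantissa = word & 0x7FF
    if mantissa > 1023:          # sign-extend the 11-bit mantissa
        mantissa -= 2048
    return mantissa * 2 ** exponent

# Raising the fault limit from 4 A to, say, 6 A (value is illustrative):
print(hex(encode_linear11(4.0)), hex(encode_linear11(6.0)))
```

The same format covers the warning threshold (IOUT_OC_WARN_LIMIT), so widening both margins is just two word writes followed by a save to non-volatile configuration.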
In these cases we needed to slightly adjust the output voltage or fault thresholds on a rail. Another relatively common change when working with early silicon is supply sequencing, which could also easily be adjusted. Inrush-current issues causing dips on your input supply could be rectified as well by slowing down ramp times. Now realistically, the odds of your requirements changing after your design is built are probably pretty low. That being said, the flexibility is pretty nice to have, especially during bring-up and validation. If you happen to have one of these platforms and would like the update procedure and programming files, you can request them here - http://Avnet.me/AvnetProgrammingFiles
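To make the sequencing and ramp-time point above concrete: in a PMBus-programmable architecture, power-up order and ramp speed usually reduce to each rail's TON_DELAY and TON_RISE settings. Here is a minimal sketch of staggering turn-on delays; the rail names and timing values are made up for illustration and are not the actual configuration of either board:

```python
# Sketch: staggering per-rail turn-on delays to set power-up order and
# soften inrush on the input supply. Rail names and timings are
# illustrative only; real values would be written to each rail's
# TON_DELAY / TON_RISE PMBus registers and saved to non-volatile memory.

RAMP_MS = 2.0      # a slower TON_RISE spreads inrush current over the ramp
STAGGER_MS = 5.0   # gap between successive rails turning on

def sequence(rails):
    """Assign each rail a TON_DELAY so they come up in list order."""
    return {rail: i * STAGGER_MS for i, rail in enumerate(rails)}

delays = sequence(["VCCINT", "VCCAUX", "VCCO_DDR", "VCC_VCU"])
for rail, delay in delays.items():
    print(f"{rail}: TON_DELAY={delay} ms, TON_RISE={RAMP_MS} ms")
```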