The 2020 DesignCon conference included many talks and exhibits with a storage and memory focus. Both Rambus and Teledyne LeCroy gave tutorials on design and connectivity for leading-edge electronic components and systems, as well as on testing memory systems. This piece will look at material from the tutorials and exhibits that can inform us about disaggregated processing developments, high-speed chip-to-chip interfaces, and memory for AI applications.
News
Hardware Attack Surface Widening
An expanding attack surface in hardware, coupled with increasing complexity inside and outside of chips, is making it far more difficult to secure systems against a variety of new and existing types of attacks. Security experts have been warning about the growing threat for some time, but it is being made worse by the need to gather data from more places and to process it with AI/ML/DL. So even though efforts are beginning to solidify around secure methodologies and technologies, they are not keeping pace with the growth in data and advancing technology that can turn that data into valuable information.
Securing our IoT future
The holiday season brought with it a surge of new IoT devices, from smart toys and doorbells to automatic pet feeders – and it doesn’t stop there. According to IDC, investment in IoT is predicted to top $1 trillion in 2020. As our homes, businesses and cities become more connected than ever before, this number will only continue to rise. However, whilst the desire and demand for all-things IoT has taken centre-stage, it presents numerous challenges to security. If we want the connected age to deliver on its promised benefits, security must be front and centre.
Storage and Networking Bytes: PCIe5, OpenShift, and Veeam
Let’s start with PCIe5, the spec for which was finalized in early 2019. Manufacturers are now getting revved up to produce PCIe5 hardware in 2020, which will be a boon for data- and processor-hungry workloads like machine learning and AI, as well as high performance computing (HPC) workloads that rely on GPUs, FPGAs, and ASICs to process data.
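The bandwidth jump behind the PCIe5 excitement is easy to verify with back-of-the-envelope arithmetic: each generation doubles the per-lane signaling rate (8, 16, and 32 GT/s for PCIe 3.0, 4.0, and 5.0), and all three use 128b/130b line encoding. A quick sketch (function names here are illustrative, not from any library):

```python
# Per-lane raw signaling rates in GT/s; PCIe 3.0 onward use 128b/130b encoding,
# so usable bandwidth is the raw rate * 128/130, divided by 8 to get bytes.
GT_PER_S = {"PCIe 3.0": 8, "PCIe 4.0": 16, "PCIe 5.0": 32}

def lane_bandwidth_gbps(gen: str) -> float:
    """Usable bandwidth per lane, per direction, in GB/s."""
    return GT_PER_S[gen] * (128 / 130) / 8

def link_bandwidth_gbps(gen: str, lanes: int = 16) -> float:
    """Usable bandwidth of an x16-style link, per direction, in GB/s."""
    return lane_bandwidth_gbps(gen) * lanes

for gen in GT_PER_S:
    print(f"{gen} x16: {link_bandwidth_gbps(gen):.1f} GB/s per direction")
# A PCIe 5.0 x16 link lands at roughly 63 GB/s each way, double PCIe 4.0.
```

That doubling at the same lane count is what lets accelerator cards feed GPU- and FPGA-class workloads without widening the physical slot.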
Accelerating AI And ML Applications With PCIe 5
The rapid adoption of sophisticated artificial intelligence/machine learning (AI/ML) applications and the shift to cloud-based workloads have significantly increased network traffic in recent years. Historically, the intensive use of virtualization ensured that server compute capacity adequately met the needs of heavy workloads. This was achieved by dividing or partitioning a single (physical) server into multiple virtual servers to intelligently extend and optimize utilization. However, this paradigm can no longer keep up with the AI/ML applications and cloud-based workloads that are quickly outpacing server compute capacity.
What Makes Secure Processors Different?
Given the magnificent complexity of modern microprocessors, it’s inevitable that they’ll have bugs and security holes. It might even be physically impossible to create a bug-free CPU, but that’s a mathematics/physics/EDA/statistics/philosophical conundrum that’s above my paygrade. For now, we finds the bugs and we works around ’em.
