15
Jul 16

For many years now, computer systems with multiple processors, each often with its own cache, have used a common cache coherence protocol to allow all of them to access their shared memory and the contents of each other's caches. Esoteric enough for you? What this has meant, though, is that all of the processors in such systems are essentially identical. High-capacity systems can be built from many such homogeneous processors, but there tends to be a limit on how large these systems can get. More recently, specialized compute accelerators have been showing up on the market, producing outstanding performance in specialized situations; being different, though, they have not been able to access the system's memory in the same way as the homogeneous SMP's processors. Wanting the best of both worlds, a consortium of companies has formed CCIX with the intent of creating still faster heterogeneous systems built from both generic processors and accelerators. Outlining what they are up against is largely the purpose of two articles I've recently had published on The Next Platform:

  1. Drilling Into The CCIX Coherence Standard ... (July 13, 2016)
  2. Weaving Accelerators Into The Memory Complex ... (July 14, 2016)
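For readers who would like a feel for what a coherence protocol actually does, here is a minimal MESI-style sketch. This is my own toy illustration, not CCIX or any real protocol implementation: each cached copy of a line is Modified, Exclusive, Shared, or Invalid, and one processor's read or write may force the copies in other caches to change state.

```python
# Toy MESI-style state transitions; real protocols are far more involved.
# States: "M"odified, "E"xclusive, "S"hared, "I"nvalid.

def mesi_on_write(my_state, other_states):
    """A local write always ends Modified; every other copy is invalidated."""
    return "M", ["I"] * len(other_states)

def mesi_on_read(my_state, other_states):
    """A local read hits if we hold any valid copy; a miss may demote others."""
    if my_state in ("M", "E", "S"):
        return my_state, other_states  # cache hit, no state change needed
    # Miss: if any other cache holds the line, everyone drops to Shared.
    if any(s in ("M", "E", "S") for s in other_states):
        return "S", ["S" if s != "I" else "I" for s in other_states]
    return "E", other_states  # we now hold the only copy in the system

# One processor reads a line another cache holds Exclusive: both end Shared.
state, others = mesi_on_read("I", ["E"])
# state == "S", others == ["S"]
```

The key point for the articles above is that every processor on the bus must speak this same protocol, which is exactly what makes attaching a foreign accelerator hard.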

Enjoy.

21
Apr 16

After having published a preceding set of articles on HPE's "The Machine," I was given the opportunity to follow some of the work being done by Dhruva Chakrabarti and his team at Hewlett-Packard Labs on the development of a programming model in support of persistent memory. The result is a two-part series describing such a persistent-memory programming model. These can be found here:

  1. Programming For Persistent Memory Takes Persistence (April 21, 2016)
  2. First Steps In The Program Model For Persistent Memory (April 25, 2016)

Enjoy.

11
Nov 15

I am today again published on the newly renamed technical web site The Next Platform, this time on Transactional Memory, with paired articles titled

Enjoy.

30
Sep 15

I have again had an article published in The Platform. This one, on Processor Virtualization, is split into two parts and can be found here:

Enjoy.

3
Sep 15

I have again had the pleasure of having an article published by the web site The Platform, this time in two parts. As the article says at the start, it was written as part of my own study of what the term "In-Memory Computing" is really all about. I hope you enjoy it. The articles can be found here:

30
May 15

I've contended for some time that among the key concepts missing from Computer Science education are those associated with addressing. In fact, the first language most new students are taught is Java, a language which goes out of its way to keep programmers from ever using an address. So I've wanted to write something that provides a rapid overview of addressing, and I think I've found a readable way of explaining it: IBM's Power CAPI is special because it allows an I/O device to use the same addressing as is used in a typical program. How it does that, and how addressing actually works, is the purpose of this page, called Power CAPI's Secret is Addressing.
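The distinction the page turns on can be sketched with a toy model. This is my own illustration, not CAPI's actual interface: in copy-based I/O the device works on a private snapshot of the data, while with shared addressing the device holds the very same reference (address) the program uses, so both sides see one memory.

```python
# Toy contrast between copy-based I/O and shared-addressing I/O.
# "Device" here is just a stand-in function, not real hardware.

data = bytearray(b"hello")

def device_with_copy(buf):
    # Copy-based: the device gets a private snapshot at hand-off time;
    # later updates by the program are invisible to it.
    return bytes(buf)

def device_with_shared_addressing(buf):
    # Shared addressing: the device keeps the same reference the
    # program uses, so both sides observe the same memory.
    return buf

snapshot = device_with_copy(data)
shared = device_with_shared_addressing(data)

data[0:5] = b"HELLO"  # the program updates its buffer in place

# snapshot is still b"hello"; shared now reads back b"HELLO"
```

In real hardware the copy-based path also involves pinning pages and translating to physical addresses, which is exactly the overhead the article describes CAPI avoiding.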

And, once again, on June 22, 2015, I had the pleasure of having this article published on the web site The Platform. See Addressing Is The Secret Of Power8 CAPI.

Enjoy.

19
Apr 15

Being interested in variable memory access latencies - some pretty esoteric stuff, right? - from my work with NUMA-based topologies, I was intrigued when I first read about the Knights Landing processor with its Near Memory (a.k.a. On-Package Memory). Not much was written on it at the time, so I decided to write - mostly for myself - about what it would need to be, given the scraps of real information out there. The paper, called Thoughts and Conjecture on Near Memory, is the result.

As of April 29, 2015, I am also very proud to announce that this article can be found as part of The Platform web site here: http://www.nextplatform.com/2015/04/28/thoughts-and-conjecture-on-knights-landing-near-memory/.

Enjoy.

11
Apr 15

Having volunteered in RCTC's (Rochester Community and Technical College) Math Learning Center, and having been allowed to work with both the students and some very skilled volunteers, it began to dawn on me that we had something very special there. We were seeing students who would otherwise be failing or just getting by instead being very successful. It struck me, and then others, that something along these lines might be just the push our education system needed.

So the page linked here - Study Hall - is an argument that we need to start some sort of a process that matches up retired but skilled individuals with the students who need them, no matter the subject, no matter the grade.

Enjoy.

10
Apr 15

I've been intrigued by a relatively new notion called a Burst Buffer. It seems to be used most in discussions relating to HPC (High-Performance Computing) and to near-future, really large systems called Exascale. The concept did not seem too difficult, but it also did not seem to be described in clear enough terms. So I thought I'd see whether I might do better. My attempt can be found here: The What and Why of Burst Buffers.
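The core idea can be sketched in a few lines. This is a minimal model of my own, with assumed names rather than any real burst-buffer API: a fast tier absorbs an application's write burst (say, a checkpoint) at near-memory speed, and a separate drain step later trickles the data out to the slow backing store while the application goes back to computing.

```python
import collections

class BurstBuffer:
    """Toy two-tier store: a fast absorbing tier and a slow backing store."""

    def __init__(self):
        self.fast = collections.deque()  # stand-in for node-local NVM/SSD
        self.slow = []                   # stand-in for the parallel file system

    def write(self, block):
        # The compute node returns to work as soon as the fast tier holds
        # the block; it does not wait for the slow store.
        self.fast.append(block)

    def drain(self):
        # Conceptually runs in the background, moving absorbed blocks out
        # to the slow store at whatever rate that store can sustain.
        while self.fast:
            self.slow.append(self.fast.popleft())

bb = BurstBuffer()
for step in range(3):
    bb.write(f"checkpoint-{step}")  # the burst: cheap fast-tier appends
bb.drain()                          # later: trickle out at disk speed
```

The win, of course, is entirely in the latencies the toy model leaves out: the burst completes at the fast tier's speed rather than the file system's.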

And, as of May 19, 2015, I have again had the honor of having an article of mine published on The Platform. You can find this same article here: http://www.nextplatform.com/2015/05/19/the-what-and-why-of-burst-buffers/

Enjoy.