Yet More Bits and Bytes

Again I must offer apologies for the lack of real content. It has been an exciting, busy period. Many of the pieces I’ve started over the past few months still sit in a draft state.

As one change, I recently updated my work laptop to a Lenovo Yoga 720. I don’t normally post about random hardware, and certainly have never mentioned the procession of laptops that preceded this one, but I simply love this laptop. I’ve always viewed my laptops as a poor alternative to my desktop, used only when the situation absolutely demands it, but this is the first laptop that I actually want to use (and the first whose battery life remotely approximates its claims). Outrageously fast 1TB SSD (1.5GB/s at times, and of course ridiculous random access times), an i7, a gorgeous 4K screen, and work-day-long battery life. Most importantly, given a lot of the work I’ve been engaged in recently, it has a GTX 1050, allowing me to pound out some convolutional neural networks on it. 2GB of GPU RAM limits the scope of the network, but it still offers extraordinary opportunities to hash out solutions that I can then move to the less portable Titan V. I don’t even use the convertible or pen functionality, and very seldom use the touch screen, though they came along for the ride.
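To give a flavor of what fits (a minimal PyTorch sketch, not code from any actual project of mine; the architecture, input size, and class count are arbitrary assumptions), a compact convolutional network along these lines trains comfortably within 2GB of VRAM, and because it picks its device at runtime the same script moves to the Titan V unchanged:

```python
import torch
import torch.nn as nn

# Use whatever CUDA device is present; the same script runs on the
# laptop's GTX 1050 or on the Titan V without modification.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

class SmallConvNet(nn.Module):
    """A deliberately compact CNN: with modest batch sizes on 32x32
    inputs, parameters plus activations stay well under 2GB of VRAM."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallConvNet().to(device)

# Report how much memory there is to play with on this machine.
if device.type == "cuda":
    props = torch.cuda.get_device_properties(device)
    print(f"{props.name}: {props.total_memory / 2**30:.1f} GiB")
```

Scaling up for the bigger card is then largely a matter of widening layers and batch sizes rather than restructuring anything.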

The one real weakness of the 720 is that its Thunderbolt 3 port (i.e. through the USB-C connector) has only two PCIe lanes behind it, or about 16 Gbps of bandwidth if my recall is correct. I’ve contemplated putting the Titan V in an external enclosure, and tasks that are heavily bandwidth bound could be limited by this. For gaming this restriction is unlikely to be relevant, but it could come into play for workloads that need to constantly move working data to and from the GPU. I do plan on benchmarking it at some point, as having the GPU external to my desktop is ideal for both flexibility and heat dissipation.
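For what it’s worth, the benchmark itself is simple enough; a sketch along these lines (buffer size and iteration count are arbitrary choices of mine) would measure host-to-device bandwidth over whatever link the card sits behind:

```python
import time
import torch

assert torch.cuda.is_available()

# 256 MiB of pinned (page-locked) host memory: pinned buffers are what
# async DMA transfers use, so this measures the link rather than paging.
n_bytes = 256 * 2**20
host = torch.empty(n_bytes, dtype=torch.uint8, pin_memory=True)
dev = torch.empty(n_bytes, dtype=torch.uint8, device="cuda")

# Warm up so one-time driver and allocator costs don't skew the numbers.
for _ in range(3):
    dev.copy_(host, non_blocking=True)
torch.cuda.synchronize()

iters = 20
start = time.perf_counter()
for _ in range(iters):
    dev.copy_(host, non_blocking=True)
torch.cuda.synchronize()  # wait for all queued copies to finish
elapsed = time.perf_counter() - start

print(f"Host -> device: {n_bytes * iters / 2**30 / elapsed:.2f} GiB/s")
```

Two PCIe 3.0 lanes top out around 2 GB/s in theory, so anything approaching that figure would mean the link, not the enclosure or the benchmark, is the ceiling.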

On that topic, the Titan V is simply outrageously powerful. Whether in double, single, or half precision it is incredible for classic networks and for scientific or financial uses, and once the Tensor cores come into play it reaches extraordinary heights. An incredible processing value.
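The precision spread is easy to see for yourself. A quick sketch like the following (matrix size and iteration count are arbitrary; my understanding is that on Volta a half-precision matmul of these dimensions routes through the Tensor cores via cuBLAS) times a large matrix multiply at each precision:

```python
import time
import torch

assert torch.cuda.is_available()

def time_matmul(dtype, n=4096, iters=10):
    """Return average GPU seconds for one n x n matmul at this precision."""
    a = torch.randn(n, n, device="cuda", dtype=dtype)
    b = torch.randn(n, n, device="cuda", dtype=dtype)
    torch.matmul(a, b)  # warm-up: triggers kernel selection and caching
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        torch.matmul(a, b)
    torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters

n = 4096
for dtype in (torch.float64, torch.float32, torch.float16):
    secs = time_matmul(dtype, n=n)
    tflops = 2 * n**3 / secs / 1e12  # ~2*n^3 flops per n x n matmul
    print(f"{dtype}: {secs * 1e3:.1f} ms, ~{tflops:.1f} TFLOPS")
```

On the Titan V the half-precision number should land far above the single-precision one once the Tensor cores engage; on a card without them, the gap is much more modest.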

To continue this diversion, it is astonishing how thoroughly AMD has screwed up their opportunities in the deep learning and even the broader scientific community. OpenCL is an afterthought, and everything, it seems, is built only for CUDA. They attempted a last-minute Hail Mary with HIP, but clearly gave it too few resources to make it credible. It will be interesting to see how they try to recover (competition would be good for everyone), but as it stands it’s an NVIDIA world.

Other than that, I solemnly promise to get a couple of technical pieces published shortly. Swearsies. And they’ll be great.