Web Excursions 2021-10-03
Cloudflare’s Disruption by stratechery.com
Christensen defined new market disruption in The Innovator’s Solution:
The third dimension [is] new value networks.
These constitute either new customers who previously lacked the money or skills to buy and use the product,
or different situations in which a product can be used —
enabled by improvements in simplicity, portability, and product cost
New market disruptors don’t stand still,
but can leverage the huge runway provided by the new market to build up their product capabilities
in a way that eventually threatens the incumbent
When it comes to compute, however, reality is very different than theory.
First, usage may be uneven, whether because a business is seasonal, hit-driven, or anything in between.
Compute capacity has to be built out for the worst-case scenario, even though that means most resources sit idle most of the time.
Second, compute capacity is likely growing — hopefully rapidly, in the case of a new business.
A business has to overbuild for its current needs so that it can accommodate future growth
Third, compute capacity is complex and expensive.
huge fixed costs
significant ongoing marginal costs
[AWS'] prices have, as you might expect, come down over the ensuing 15 years:
A gigabyte of storage today is $0.023, a decrease of 85%
Moving data into S3 is free, a decrease of 100%
Moving a gigabyte out of S3 is $0.09, a decrease of 55%
What is consistent across all of those variables, though, is the difference between the cost of moving data into AWS and the cost of moving data out
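The quoted decreases can be sanity-checked arithmetically. A quick sketch, assuming S3’s commonly cited 2006 launch prices of $0.15/GB-month for storage and $0.20/GB for transfer in either direction (figures not stated in the excerpt):

```python
# Sanity-check the quoted price decreases against S3's assumed 2006
# launch prices ($0.15/GB-month storage, $0.20/GB transfer each way).
launch = {"storage": 0.15, "data_in": 0.20, "data_out": 0.20}
today = {"storage": 0.023, "data_in": 0.0, "data_out": 0.09}

for item in launch:
    decrease = (launch[item] - today[item]) / launch[item] * 100
    print(f"{item}: ${today[item]:.3f}/GB, down {decrease:.0f}%")
```

The rounded results (85%, 100%, 55%) match the figures in the excerpt.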
[CF published] a blog post from earlier this year, by CEO Matthew Prince, that likened AWS to the “Hotel California”;
in it Prince made the case that, based on Cloudflare’s understanding of bandwidth costs,
AWS was making a 7959% margin on US/Canada egress fees;
Prince’s conclusion at the time was that AWS ought to join the Bandwidth Alliance and discount or waive egress fees when sending data to Cloudflare (which doesn’t cost AWS anything, thanks to an industry-standard private network interconnect),
but two months on, the true point of Prince’s post was clearly this week’s announcement.
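The 7959% figure can be inverted to see what per-GB cost it implies. A rough sketch, treating the quoted percentage as markup over cost, i.e. (price − cost) / cost (the cost figure below is derived from the excerpt’s numbers, not stated independently):

```python
# Back out the per-GB bandwidth cost implied by a 7959% markup
# on AWS's $0.09/GB US/Canada egress price.
price = 0.09        # S3 egress, $/GB
markup_pct = 7959   # Cloudflare's estimate

implied_cost = price / (1 + markup_pct / 100)
print(f"implied bandwidth cost: ${implied_cost:.4f}/GB")  # ~$0.0011/GB
```

In other words, on these numbers AWS would be charging roughly 80x its underlying bandwidth cost.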
[Comparing AWS and CF]
[If I were AWS,] I am constrained by the capacity of the cable; to support more data transfer I would have to install a higher capacity cable, or more of them.
[But if I were] Cloudflare, I would charge marginal rates for my actual marginal costs
(storage, and some as-yet-undetermined-but-promised-to-be-lower-than-S3 rate for operations),
and give away my zero marginal cost product for free. S3’s margin is R2’s opportunity.
I would have massive amounts of bandwidth already in place, the use of which has zero marginal costs,
and oh-by-the-way locations close to end users to stick a whole bunch of hard drives.
The reason that Cloudflare can pull this off is the same reason why S3’s margins are so extraordinary:
bandwidth is a fixed cost, not a marginal one.
S3 was the foundation of AWS’s integrated cloud offering, and remains the linchpin of the company’s lock-in;
R2 is a compelling choice for a certain class of applications that could be built to serve a lot of data without much compute.
R2 may be a direct competitor for S3, but that doesn’t mean that anything else about Cloudflare’s cloud ambitions has to be the same.
what if R2, thanks to its explicit rejection of data lock-in, becomes the foundation of an entirely new ecosystem of cloud services that compete with the big three by being modular?
Here it is a benefit to Cloudflare that it is a relatively small company: opportunities that seem trivial to giants will be big wins.
it will be very difficult for Amazon to respond:
sure, R2 may lead Amazon to reduce its egress fees,
but given the importance of those fees to both AWS’s margins and its lock-in, it’s hard to see them going away completely.
AWS itself is locked-in to its integrated approach:
the entire service is architected both technically and economically to be an all-encompassing offering; to modularize itself in response to Cloudflare would be suicidal.
Google developing own CPUs for Chromebook laptops
Google is developing its own central processors for its notebook and tablet computers, the latest sign that major tech players see in-house chip development as key to their competitiveness.
The U.S. internet giant plans to roll out the CPUs for laptops and tablets, which run on the company's Chrome operating system, in around 2023
georgyo:
Apple, Google, and Amazon are now creating their own ARM CPUs for their own products.
Once they get a leg up they will start adding patented operations to their stuff. And then we'll end up with a fragmented CPU field driven by corporate greed.
xyzzy_plugh:
the reality is that the proliferation of "custom" (but under license) Arm chips is the end goal of Arm. That's their whole business model.
They (Arm, the company) don't manufacture anything. They just design and license their designs.
klelatti:
All these CPUs will be Arm compatible, otherwise they will be breaking their license.
Maybe they will add accelerators as Apple has done but Arm compatible code will still run on all these CPUs.
they [do have an] incentive to be cooperative with the others, because they need Arm code to run on them.