Web Excursions 2022-05-06
DALL-E, the Metaverse, and Zero Marginal Content
DALL-E is an example of how imaginative humans and clever systems can work together to make new things, amplifying our creative potential.
That last line may raise some eyebrows:
at first glance DALL-E looks poised to compete with artists and illustrators;
there is another point of view, though, where DALL-E points towards a major missing piece in a metaverse future.
Games have long been at the forefront of technological development.
Still, this evolution has had its challenges.
Social networking has undergone a similar medium evolution as games, with a two-decade delay.
Games may be mostly deterministic, but humans are full of surprises.
Moreover, this means that social networking is much cheaper:
instead of the platform having to generate all of the content,
users generate all of the content themselves.
The first iterations of social networking had no particular algorithmic component other than time.
Over time the News Feed evolved from a relatively straightforward algorithm to one driven by machine learning.
TikTok, of course, is all user-generated content,
but the crucial distinction from Facebook is that you aren’t limited to content from your network:
TikTok pulls in the videos it thinks you specifically are most interested in from across its entire network.
What is fascinating about DALL-E is that it points to a future where these three trends can be combined.
In the very long run this points to a metaverse vision that is
much less deterministic than your typical video game, yet
much richer than what is generated on social media.
Machine learning generated content is just the next step beyond TikTok:
instead of pulling content from anywhere on the network, GPT and DALL-E and other similar models
generate new content from content, at zero marginal cost.
This is how the economics of the metaverse will ultimately make sense:
virtual worlds need virtual content created at virtually zero cost,
fully customizable to the individual.
I’m a Fashion Editor, and I Shop at the Dump
Surveying the Swap Shop’s jumble, I saw infinite possibilities.
Even the most dated clothes seemed ready to spring to life,
like actors of a certain age waiting to be rediscovered by Quentin Tarantino.
I’ve dug out perfectly wearable A.P.C. sweaters and COS shirts, and
a family friend told me about finding a Ferragamo bag with leftover cash inside it.
My father found a fine sweatshirt from a posh private school.
Luxury brands that once destroyed and even burned unsold merchandise are now thinking of ways to reinvent it.
Salvage and resale have become antidotes to the conveyor belt of fast fashion,
wherein clothing behemoths like Shein offer thousands of new styles every week,
social media users display their latest avalanche of purchases in “haul videos” and
Instagram influencers post themselves in new outfits multiple times a day.
Interpreting and using disk performance data
Blackmagic Disk Speed Test is primarily intended to help users of Blackmagic products
determine whether their storage is capable of the write and read performance required to handle different types of video.
Its analogue speedometer display is fun, but constantly changes during testing,
leaving you guessing what the true transfer rates were.
This makes its results subjective and susceptible to user interpretation.
AmorphousDiskMark is backed up by more information: for instance, its default sequential read/write queue depth is 8, and it runs 5 test iterations with a test size of 1 GiB, a test interval of 5 s, and a test duration limit of 5 s.
For the most widely quoted figure of "SEQ1M QD8", it's described as
"reading/writing the specified size file sequentially
with 128 KiB blocks
from the specified number of threads (queue depth)."
It's most commonly used in its default configuration, with 5 test iterations of 1 GiB, for which it "shows the median score".
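To make the mechanics concrete, here is a minimal sketch of that kind of sequential test: a single-threaded write pass in 128 KiB blocks over a 1 GiB file, repeated five times with the median reported. It is a simplification, not the tool's actual method: it is queue depth 1 rather than the default 8, tests writes only, and the file path is a hypothetical mount point.

```python
import os
import time
from statistics import median

TEST_FILE = "/Volumes/External/speedtest.tmp"  # hypothetical target volume
BLOCK = 128 * 1024        # 128 KiB blocks, as in the SEQ1M description
TEST_SIZE = 1024 ** 3     # 1 GiB per iteration
ITERATIONS = 5

block = os.urandom(BLOCK)
rates = []
for _ in range(ITERATIONS):
    start = time.perf_counter()
    with open(TEST_FILE, "wb") as f:
        for _ in range(TEST_SIZE // BLOCK):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # push data to the device, not just the cache
    rates.append(TEST_SIZE / (time.perf_counter() - start) / 1e6)  # MB/s

os.remove(TEST_FILE)
print(f"sequential write: median {median(rates):.0f} MB/s over {ITERATIONS} runs")
```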
Although taking the median of five results may appear a wise statistical precaution,
the user is given no idea of the spread of results, so it's easy to see how misleading that can be.
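A quick illustration with made-up numbers shows why: two sets of five runs can share a median while telling very different stories about consistency.

```python
from statistics import median

# Two hypothetical sets of five iteration results (MB/s): the
# reported medians match, but the spreads do not.
steady = [2840, 2855, 2860, 2870, 2880]
erratic = [950, 1400, 2860, 2900, 2910]

for label, runs in (("steady", steady), ("erratic", erratic)):
    print(f"{label}: median {median(runs):.0f} MB/s, "
          f"range {min(runs)}-{max(runs)} MB/s")
```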
A problem common to both these tests is
their reliance on a single transfer size,
typically in the range 1-5 GB, and
the limited number of measurements performed.
Transfer rates for sizes in the range 2-100 MB are
often significantly lower, and more relevant to typical use cases,
while some storage becomes significantly slower above 1 GB,
equating to large media file sizes.
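Extending the earlier sketch to sweep transfer sizes, rather than fixing a single one, would expose that behaviour; the sizes below are illustrative choices spanning small transfers to large media files.

```python
import os
import time

TEST_FILE = "/Volumes/External/speedtest.tmp"  # hypothetical target volume
BLOCK = 128 * 1024
SIZES_MB = [2, 20, 100, 1024, 2048]  # from small transfers to media-file sizes

block = os.urandom(BLOCK)
for size_mb in SIZES_MB:
    size = size_mb * 1024 ** 2
    start = time.perf_counter()
    with open(TEST_FILE, "wb") as f:
        for _ in range(size // BLOCK):
            f.write(block)
        os.fsync(f.fileno())
    rate = size / (time.perf_counter() - start) / 1e6
    print(f"{size_mb:>5} MB: {rate:.0f} MB/s")

os.remove(TEST_FILE)
```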
This brings us to the most common cause of anomalously high transfer rates: buffering and caching.
Particularly when dealing with files smaller than 2 MB,
macOS and storage devices go out of their way to improve their performance using fast memory.
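One partial mitigation on macOS is the F_NOCACHE fcntl, which asks the system not to cache a file's data. A sketch, with the caveats that the path is hypothetical, the constant may not be exposed by every Python build (48 is its documented value in macOS's sys/fcntl.h), and device-level caches are unaffected:

```python
import fcntl
import os
import time

# Fall back to the raw macOS value if the module doesn't expose it.
F_NOCACHE = getattr(fcntl, "F_NOCACHE", 48)

TEST_FILE = "/Volumes/External/cachetest.tmp"  # hypothetical path
SIZE = 512 * 1024  # a sub-2 MB file, where caching effects dominate
data = os.urandom(SIZE)

with open(TEST_FILE, "wb") as f:
    fcntl.fcntl(f.fileno(), F_NOCACHE, 1)  # ask macOS not to cache this file
    start = time.perf_counter()
    f.write(data)
    os.fsync(f.fileno())
    print(f"uncached write: {SIZE / (time.perf_counter() - start) / 1e6:.1f} MB/s")

os.remove(TEST_FILE)
```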