Before the installation process can start, the bootable volume needs an owner.
This user doesn’t need to be an admin user as such,
but they should already have access to the encrypted Data volume on the internal storage
so they can be recognised as an owner.
The installer then offers to copy the account settings for that user (as the owner of the external volume).
The straightforward option here is to tick that box, which eventually will create the account as the primary admin user with the same settings.
Don’t shut your Mac down and disconnect that disk when your Mac is expecting to boot from it.
Perceived speed is all about latency.
When discussing software performance, we often hear about throughput. But that’s not how users perceive things.
When dragging items on the screen, users can perceive latencies as low as ~2ms.
The just noticeable latency varies by user and action being performed, but it’s consistently very low.
Another common operation on touch devices is tapping on buttons or links.
Here, tests suggest that users on average notice latency once it exceeds ~70ms
(though it’s likely lower for some individual users).
In terms of dragging with a finger,
no current consumer system will consistently meet the low single digit millisecond level needed to satisfy all users.
So all current touchscreen operating systems will leave at least some users feeling like the object they’re dragging is lagging behind their finger.
In any event, it seems likely the latency threshold for typing is below ~100ms for many users, and perhaps well below it.
Input latency of mice varies widely.
Some setups achieve latencies in the single digit milliseconds range
by combining high-performance hardware with careful, low-level programming.
It’s also possible to exceed 100ms of end-to-end latency
with a combination of mediocre hardware and applications that introduce extra delays or buffers between input and display.
Another reference point is Google’s RAIL model.
This model claims that responses within 100ms “feel like the result is immediate” and that
higher latency “[breaks] the connection between action and reaction”.
The typical human reaction time from seeing a visual stimulus to taking a physical action is about 220ms.
This value must be significantly more than noticeable latencies,
because reactions involve observing something and then doing something.
Altogether we think this suggests action latencies should be ~100ms or less to avoid user perception of delay.
How do current apps fare against this benchmark?
Some do well. For example, many Unix command line programs run in under 100ms.
Most of the web does poorly.
In the case of mobile and desktop, there are some apps that will consistently achieve <100ms latency, such as the built-in calculator on iOS.
But it’s easy to find cases of productivity apps that significantly exceed this threshold even when they have (or should have) all data available locally.
Consider the Slack example below:
Keyboards can easily consume tens of milliseconds of the latency budget at the very first step in the processing pipeline.
Mice can similarly introduce tens of milliseconds of latency, though the highest-performance gaming mice have latencies in the single-digit millisecond range.
We can use a few of the common patterns in input device hardware to understand latencies in these as well as standalone devices.
One common pattern is periodic sampling.
In many input devices, the hardware “scans” or “samples” for new input on a periodic interval.
For example, typical consumer touch screens sample for input at a rate of 60 Hz, or once every ~17ms.
This means that in the worst case, input device latency will be at least ~17ms,
and in the average case it can be no better than ~8ms.
Low-speed USB scans at 125 Hz,
introducing an unavoidable ~8ms maximum and ~4ms average delay.
More recent USB versions scan at 1000 Hz or more, minimizing the latency impact.
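The arithmetic here is simple enough to sketch directly: for a device that samples at a fixed rate, the worst-case added latency is one full sampling interval, and the average is half of one, assuming input arrives at a uniformly random moment between scans.

```python
# Latency added by a device that samples at a fixed rate, assuming the input
# event arrives at a uniformly random moment between two scans.
def sampling_latency_ms(sample_rate_hz):
    interval_ms = 1000.0 / sample_rate_hz
    return interval_ms, interval_ms / 2  # (worst case, average)

for name, hz in [("60 Hz touchscreen", 60),
                 ("125 Hz low-speed USB", 125),
                 ("1000 Hz USB", 1000)]:
    worst, avg = sampling_latency_ms(hz)
    print(f"{name}: worst ~{worst:.1f} ms, average ~{avg:.1f} ms")
```

This reproduces the figures above: ~17ms/~8ms for a 60 Hz touchscreen and ~8ms/~4ms for low-speed USB.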
At the other end of the pipeline are displays and graphics cards.
One source of latency here is the frame rate of the display.
Since displays can’t redraw constantly, this introduces unavoidable latency similar to the input scanning discussed above.
If a screen updates (say) every 20ms, it adds 20ms of latency in the worst case and 10ms in the average case.
Another contribution to latency from displays is the time it takes them to physically change the color of pixels after they receive new pixel data.
This time varies from low single digit milliseconds or less in high-end gaming displays
to double digit milliseconds in less responsive LCDs.
A related issue happens when application code is outright slow,
and doesn’t even send instructions to the GPU fast enough to take full advantage of it.
We’ve discussed at least three parts of the pipeline where latency accrues due to periodic activity: input scanning, GPU rendering loops, and display refresh cycles.
It’s important to note that these can stack in ways that essentially add all of their latencies together.
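As a rough illustration of that stacking (the stage rates below are assumptions for the sake of the example, not measurements of any real system), each periodic stage can hold an event for up to one full period, and in the worst case those waits simply add:

```python
# Rough worst-case stacking of periodic pipeline stages. Each stage can hold
# an event for up to one full period; in the worst case those waits add.
# The stage list is illustrative, not a measured pipeline.
stages_hz = {"input scan": 125, "render loop": 60, "display refresh": 60}

worst_total = sum(1000.0 / hz for hz in stages_hz.values())
avg_total = sum(1000.0 / hz / 2 for hz in stages_hz.values())
print(f"worst case ~{worst_total:.0f} ms, average ~{avg_total:.0f} ms")
```

Even before any application code runs, this hypothetical pipeline can add ~40ms of latency in the worst case, a sizable chunk of a ~100ms budget.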
On the software side, runtime overhead is a catch-all for overhead from the operating system and other non-application code.
Two important examples are garbage collection and scheduling.
A GC pause may delay a single frame rather than every frame.
But like “jank” from missed frames, latency jitters are noticeable and annoying to users.
There are ways to mitigate GC-induced latency.
These include moving as much GC work as possible off of the main thread and optimizing the GC to require only small individual pauses.
One can also use a language that trades off some of the convenience of GC for more predictable performance.
Another potential source of overhead is operating system scheduling.
Our application (and its dependencies in the OS) are not necessarily running all the time.
Other programs may be scheduled in while ours is paused, even if for a very short time.
I explained an idea for a utility that I had been wanting: A desktop program that monitors my clipboard for URLs and logs them automatically.
The students then discussed how they would implement the idea and asked a few clarification questions, before one stated: "This project will only take 2 hours."
I too run into the issue of underestimating the complexity of projects at first glance.
I do it a lot actually.
I've started using it as a thought experiment exercise for project management.
Whenever I think something is extremely simple, I walk through it step by step to uncover the complexities, design decisions, use cases, and potential features that I missed.
[This article goes] through the exercise with the clipboard URL logger.
Is it always logging?
Does it run on startup?
What is in the log file?
What should the log file contain? A raw text file of URLs separated by newlines?
Is there any other information that could be useful? Timestamps, for example.
Since raw URLs are difficult to read and don't convey what is on the other side, you might want the website title and description.
I would also expect to know where the URL was copied from.
How is the log file formatted?
A fixed number of lines per entry?
Or maybe it should be CSV? Well, URLs can contain commas, so we would need to be more selective about the character we use as a separator.
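One way around the separator problem is a structured format such as JSON Lines, one JSON object per entry. A minimal sketch (the field names here are just example choices, not a fixed schema):

```python
import json

# One JSON object per line ("JSON Lines") sidesteps the CSV separator problem:
# commas inside URLs are safely quoted, and fields are easy to add later.
def format_entry(url, timestamp, source_app):
    # Field names are illustrative, not a fixed schema.
    return json.dumps({"url": url, "ts": timestamp, "source": source_app})

line = format_entry("https://example.com/a,b", "2021-11-11T12:00:00Z", "Safari")
print(line)
entry = json.loads(line)  # round-trips cleanly even with a comma in the URL
```

Each line parses independently, which also makes partial reads and appends straightforward.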
How do you know a URL has been copied?
When do you access the clipboard?
How do you identify URLs?
Or does it require "http" or "https" at the start? Do you do an HTTP GET to check that it exists? What instances will be false positives?
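A minimal, stdlib-only answer to the identification question might require an explicit http/https scheme and a host. This is a conservative sketch of one possible policy, not the only reasonable one: it avoids most false positives at the cost of missing bare domains like example.com.

```python
from urllib.parse import urlparse

# Conservative URL check: require an explicit http/https scheme and a host.
# Misses bare domains like "example.com" but avoids most false positives.
def looks_like_url(text):
    parsed = urlparse(text.strip())
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)

print(looks_like_url("https://example.com/page"))  # True
print(looks_like_url("just some text"))            # False
print(looks_like_url("example.com"))               # False (no scheme)
```

Validating existence with an HTTP GET is a separate design decision, since it adds network traffic and its own privacy implications.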
What about privacy concerns?
Should the log be encrypted, or at least obfuscated?
Should the user be able to exclude specific URLs?
Should the log be write-only, or require a password to view?
How is the log file managed?
Will it continuously log to the same file?
Should it create a new file after a certain period of time or after it reaches some size threshold?
Will it ever purge the history?
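One simple answer to the file-management questions above is size-based rotation: keep appending to a dated file until it crosses a threshold, then roll to a new one. The threshold and naming scheme below are arbitrary example choices.

```python
import os
from datetime import datetime, timezone

# Roll to a new log file once the current one exceeds a size threshold.
# The threshold and naming scheme are arbitrary example choices.
MAX_BYTES = 1_000_000  # ~1 MB per file

def current_log_path(directory):
    name = datetime.now(timezone.utc).strftime("urls-%Y%m%d.log")
    path = os.path.join(directory, name)
    n = 0
    # Suffix a counter until we find a file with room (or a fresh one).
    while os.path.exists(path) and os.path.getsize(path) >= MAX_BYTES:
        n += 1
        path = os.path.join(directory, f"{name}.{n}")
    return path
```

Purging old history then becomes a matter of deleting files past a retention window, rather than rewriting one ever-growing log.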
Can I view or search the log?
At the very least, add a menu item that will open the log in the default text editor.
We could also take it a step further and provide a basic viewer with search.
Can I sync to the cloud?
To do that, it probably needs user profiles.
Is it ready for real-world use?
Is it easy to download, install, and configure?
Are there any instructions or a demo video?
Can someone learn to use it without explanation?
Has it been thoroughly tested?
What operating systems are supported?
What versions of those operating systems?
Does it play nicely with other software?
When can it fail? When it crashes, does it present a user-readable error?
I vaguely remember a rant some famous Linux developer went on about how setting up printing in Linux requires a dozen parameters, half of which are arcane nonsense that the printer manufacturer themselves probably doesn't know how to set up correctly.
Meanwhile, on an Apple or Windows computer it can be literally just plug and print.
The difference is that hiding those arcane input parameters takes work, and a lot of it.
The thing that sets good developers apart is being able to tease out those requirements without being a jerk or making legitimately simple requests overly complicated, then deliver.
In short, our experiment data showed a reduction in dislike attacking behavior.
Based on what we learned, we’re moving forward with making the dislike count private across YouTube. This means that the dislike button is staying, but the number of dislikes on a video will only be available to creators in Studio and not visible to the public on the video’s page. This change is gradually rolling out starting today.
You can still dislike videos to further personalize and tune your recommendations.
Bummer. I often use like vs. dislike ratios as a gauge of whether a video is worth watching.
More often than not, when I see a video with tons of dislikes, it's because it's been brigaded by an adversarial community and not really an indicator of the value of the content.
In my experience it's more of a "controversial opinion" indicator than a "bad video" indicator.
most of your videos have a fairly inconsequential number of downvotes,
and the ones that have more aren't poor quality; rather,
they've been disliked by people who weren't your target audience.
Shut up and swallow some more ads!
The fact that it isn't optional is the biggest red flag. Why not just let channels choose if they want to show dislikes?
There is a video on YouTube in which YouTube CEO Susan Wojcicki received the Free Expression Award (which was sponsored by YouTube).
When I last checked, that video had 227 upvotes and over 56,000 downvotes, making it the worst-ratioed video I have ever seen on YouTube.
This seems one sided. Why not hide the “like” count too and make both like and dislike counts visible only to the creators while using the individual action of each viewer for recommendations?