Web Excursions 2021-04-11
🌟 [Post of The Day] How to motivate yourself to change | Psyche Guides
‘Motivational interviewing’ (MI) is a counselling approach developed by the clinical psychologists William R Miller and Stephen Rollnick.
MI practitioners use their counselling skills to evoke what’s called change talk – a conversation about what clients are unhappy about and how they’d like to change.
Through an accepting, collaborative and guiding style, this approach seeks to strengthen the person’s commitment to goals they identify for themselves.
The emphasis is on a person’s own choices and own reasons for change.
Motivation often changes and fluctuates day-to-day, even moment-to-moment
MI sees motivation as a multifaceted concept that involves not only being willing to change, but also being ready and able.
Being willing means that you recognise that something concerns you about your situation
Readiness indicates that you not only recognise a need for change but see this need as a priority amid all the other competing priorities in life.
Being able refers to having confidence in your ability to change, and being in possession of the necessary knowledge and skills to make the change.
Expecting to be 100 per cent ready, willing and able isn’t realistic
Four key stages are involved when practitioners use motivational interviewing: engagement, focusing, evocation and planning.
Engagement refers to the need for practitioners to build a positive relationship or therapeutic alliance with their client or patient.
Focusing, which helps the practitioner and client identify what issue or concern in the client’s life will be addressed first.
Recognition of a problem is the first step toward building discrepancy – that is, recognising the difference between your reality and the ideal.
First, what is your reality?
Get a notepad and brainstorm what’s causing you dissatisfaction or concerns.
rate the ones you listed on a scale from 1 to 5: if the concern bothers you several times a day, you might score it a 5; if it causes you concern only once every few weeks, you might score it 1.
focus on your most highly rated concerns, and think: what would make them better, and why?
put them together – the ideal first, and then the reality: this will help you see your discrepancy for each concern:
(Ideal): My life would be better if I ______ because it would _____.
(Reality): Currently I am _________________.
Next, think about how big or small that discrepancy is.
Ideally, at this stage you can identify a change where the discrepancy is ‘just right’: serious enough to bother you, but not so huge that it’s overwhelming.
Evocation: choose the concerning behaviour(s) you most want to work on.
jot down your thoughts in two columns, your pros and your cons, and think:
why are these outcomes important?
Consider what values you hold, what principles or standards of behaviour make this potential change particularly vital.
How will working toward your change goals help you better live by these values?
another key aspect is to build your confidence in your own ability to make changes to your behaviour, e.g.:
Identify your strengths.
Identify your past successes.
Develop hope and inspiration.
Planning
Think of the ‘big picture’ first
Next, zoom back in to develop and refine your specific goal for change.
aim to translate your aims into a SMART goal, that is: one that is specific, measurable, attainable, relevant and time-bound
brainstorm possible steps you can take toward achieving the goal.
Try listing at least 10 actions, steps or tasks that will help you make progress
A support system
other resources
If you don’t have the financial resources available, is there anything you could do to save or raise the necessary funds?
Or could you find creative ways to utilise or access resources in your community?
Setting up a system of reward
Put all this information – the big picture; your specific goal; 10 specific steps; your support system; your resources; your obstacles – together in a written plan, and review it often.
Precontemplation is typically used to describe an individual who has no intention of adopting a new behaviour in the next six months.
Precontemplators can be divided into two broad categories: the uninformed and the demoralised
It’s essential to also develop a plan to handle sliding back into old patterns, otherwise known as relapse.
Should you relapse, motivational interviewing can help – revisit the earlier focusing and evocation exercises to look again at your reasons, desires or need to change.
Your backup plan could also include reviewing your reward system, continuing to build your social support system, and re-evaluating potential barriers to change.
Meet the patent troll that won a $308 million jury trial against Apple
A few weeks ago, a company you’ve probably never heard of, called Personalized Media Communications (PMC), won a $308 million patent verdict against Apple.
PMC is what patent lawyers call a “non-practicing entity,” or NPE.
as of November, they were in licensing discussions with more big companies, like Walmart and Disney.
There is another set of documents that reflects directly on this case: transcripts from the PMC v. Google
PMC’s case against Google was resolved very differently from the case against Apple. PMC lawyers wanted Google to pay as much as $183 million in damages, saying that YouTube infringed four PMC patents.
But the verdict was a clean sweep for the defense: the jury found that Google didn’t infringe any patents, and PMC walked away empty-handed.
A Family Business
The job of a non-practicing plaintiff is to normalize their business.
To the extent that Harvey and Cuddihy ever had an idea for an actual product, it seems to have been a kind of computerized graphical overlay on top of a TV screen.
PMC never made any for sale. And as the CEO admitted, they didn’t spend anything on R&D.
The Submarines Surface
PMC claimed a patent on a “remote intermediate transmitter station” covered YouTube’s system of caching videos at Edge Nodes.
PMC told the jury: “Following the steps of the methods in the '528 patent, YouTube knows how to skip over the missing frames and the incomplete frames and go to the next complete one.”
Subramanian also claimed
that YouTube’s system of showing thumbnails infringed a PMC patent related to a “multimedia presentation,” and
that PMC’s U.S. Patent No. 7,769,344 described YouTube’s DRM system.
It used to be possible to file so-called “submarine patents,”
in which the application can be filed and then argued over (prosecuted) at the U.S. Patent and Trademark Office for an extremely long time.
The clock toward expiration then starts ticking only after the USPTO grants the patent.
This particular method of manipulating the patent system was banned in 1995, when the USPTO changed how it calculates patent terms.
In June 1995, one day before the new law went into effect, Scott filed more than 300 new patent applications—all linked to the original 1981 patent.
Harvey and Scott were among the earliest and boldest proponents of pure patent licensing as a business model.
“They exist to exploit the patents”
Google didn’t even try to invalidate the patents at trial.
Instead, its lawyers focused on non-infringement—emphasizing that these old patents were simply not relevant to YouTube.
“These patents are old 1981 inventions, and they don't apply to the sophisticated internet we have today,” Google lawyer Charles Verhoeven told the jury during his opening.
Google met with PMC principals at least twice, in 2011 and 2015, to discuss buying its patents.
In 2011, Holtzman, who passed away in 2018, along with PMC licensing agent Boyd Lemna, made a presentation to Google urging the search giant to purchase PMC’s patents—to sue Apple.
Apple is a particularly unsympathetic victim.
it’s one of the richest companies on earth,
Apple has such a long history of leveraging intellectual property, and particularly DRM, in ways that are anti-competitive and bad for society.
There is an “especial and delicious irony” in Apple losing so big in a trial that’s purportedly over, in part, who invented DRM.
Setting up Starlink, SpaceX's Satellite Internet | Jeff Geerling
Price
$500 for the equipment, plus
$25 for a Volcano Roof Mount, and
$99 for the first month of service
My cousin Annie, who lives in Jonesburg, MO, currently pays for the maximum available DSL plan to her farm and gets a measly 5 Mbps down, and 0.46 Mbps up—on a good day
There are some challenges and potential pitfalls associated with the building of an entire constellation of satellites so close to earth,
SpaceX is supposedly working with the space and astronomy communities to try to find solutions that prevent risk (e.g. Kessler Syndrome) and keep the skies clear for astronomy
Equipment
The router design is as impractical as it is futuristic.
The thing would fall over if you looked at it sideways, and
the solitary LED on the front was hard to see unless you were in a dark room or looking closely
The cables and PoE injector/switch provided feel rugged and better put-together
The router runs OpenWRT, though it exposes precious few options to end-users.
After you plug in the gear, Dishy (a common nickname for the Starlink satellite dish) points straight up
then after finding a satellite, it aligns itself to a slight Northern inclination, so it can get the best signal.
Inside Dishy is a flat PCB with an array of beam-forming antennas and a network SoC that controls everything about the connection.
It's powered through PoE++ (using around 100W of power continuously), and
has two motors inside to control tilt+rotation.
Limitation
Starlink needed a 100° view of the northern sky
Starlink dishes are assigned a cell for coverage, but I didn't know how big the cell was. Apparently it's smaller than 60 miles.
Performance
I am seeing the following 24h average metrics:
Download: 106 Mbps
Upload: 16.1 Mbps
Ping: 40.58 ms
These numbers are well within expectations, and I think the connection's been perfectly adequate.
There are only infrequent total dropouts right now (to be expected, IMO), and they usually last less than 30 seconds.
HN
freedomben
East Idaho.
Currently my dish angles itself to the north.
It rarely moves itself north/south, and slightly moves east/west throughout the day.
I've read that right now it locks onto a single satellite, although they're adding multi-satellite support later.
My speeds are inconsistent, and interestingly
they start slow (around 60 Mbps)
but after a couple seconds they'll get to 150-200 Mbps (which is awesome for downloads).
Latency is consistently in the low 30s of ms.
I get some downtime every day
Setup
literally take dish out of the box,
insert into tripod (included),
plug in cables,
connect to the wireless router's SSID and
activate with the Starlink app.
After that I put the included router into storage and plugged in my Protectli running CentOS.
DualCoder/vgpu_unlock: Unlock vGPU functionality for consumer grade GPUs.
This tool enables the use of GeForce and Quadro GPUs with the NVIDIA vGPU software.
NVIDIA vGPU normally only supports a few Tesla GPUs, but since some GeForce and Quadro GPUs share the same physical chip as the Tesla ones, this is only a software limitation for those GPUs.
This tool aims to remove this limitation.
This script will only work if there exists a vGPU compatible Tesla GPU that uses the same physical chip as the actual GPU being used.
How it works
In order to determine if a certain GPU supports the vGPU functionality the driver looks at the PCI device ID.
This identifier together with the PCI vendor ID is unique for each type of PCI device.
In order to enable vGPU support we need to tell the driver that the PCI device ID of the installed GPU is one of the device IDs used by a vGPU capable GPU.
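A minimal sketch of that idea, assuming illustrative chip pairings (this is not the repo's actual hooking code):

```python
# Sketch only: answer the driver's device-ID query with the ID of a
# vGPU capable card that shares the same physical chip. The pairings
# below are illustrative; verify the chip match for your own GPU.
SPOOF_TABLE = {
    0x1B80: 0x1BB3,  # GP104: GeForce GTX 1080    -> Tesla P4
    0x1E04: 0x1E30,  # TU102: GeForce RTX 2080 Ti -> Quadro RTX 6000
}

def spoofed_device_id(real_id: int) -> int:
    """Return the PCI device ID the driver should see for this GPU."""
    return SPOOF_TABLE.get(real_id, real_id)
```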
How it all comes together
After boot the nvidia-vgpud service queries the kernel for all installed GPUs and checks for vGPU capability.
This call is intercepted by the vgpu_unlock python script and the GPU is made vGPU capable.
If a vGPU capable GPU is found then nvidia-vgpud creates an MDEV device and the /sys/class/mdev_bus directory is created by the system.
vGPU devices can now be created by echoing UUIDs into the "create" files in the mdev bus representation. This will create additional structures representing the new vGPU device on the MDEV bus.
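For instance, a sketch of creating one vGPU instance (the PCI address and nvidia type name are examples only; list mdev_supported_types to find yours):

```python
# Sketch: create a vGPU by writing a fresh UUID into an mdev "create" node.
import uuid
from pathlib import Path

# Hypothetical parent device and vGPU type; substitute your own.
create_node = Path("/sys/class/mdev_bus/0000:01:00.0"
                   "/mdev_supported_types/nvidia-55/create")
create_node.write_text(str(uuid.uuid4()))
```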
These devices can then be assigned to VMs, and when the VM starts it will open the MDEV device.
This causes nvidia-vgpu-mgr to start communicating with the kernel using ioctl.
Again, these calls are intercepted by the vgpu_unlock python script, and when nvidia-vgpu-mgr asks if the GPU is vGPU capable, the answer is changed to yes.
After that check it attempts to initialize the vGPU device instance.
Initialization of the vGPU device is handled by the kernel module, which performs its own check for vGPU capability; this one is a bit more complicated.
The kernel module maps the physical PCI address range 0xf0000000-0xf1000000 into its virtual address space, then performs some magical operations whose purpose we don't really know.
What we do know is that after these operations it accesses a 128-bit value at physical address 0xf0029624, which we call the magic value.
The kernel module also accesses a 128-bit value at physical address 0xf0029634, which we call the key value.
The kernel module then has a couple of lookup tables for the magic value, one for vGPU capable GPUs and one for the others.
So the kernel module looks for the magic value in both of these lookup tables,
and if it is found, that table entry also contains a set of AES-128-encrypted data blocks and an HMAC-SHA256 signature.
The signature is then validated by using the key value mentioned earlier to calculate the HMAC-SHA256 signature over the encrypted data blocks.
If the signature is correct, then the blocks are decrypted using AES-128 and the same key.
Inside of the decrypted data is once again the PCI device ID.
So in order for the kernel module to accept the GPU as vGPU capable:
the magic value has to be in the table of vGPU capable magic values,
the key has to generate a valid HMAC-SHA256 signature, and
the AES-128-decrypted data blocks have to contain a vGPU capable PCI device ID.
If any of these checks fail, then the error code 0x56 "Call not supported" is returned.
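Put together, the check reads roughly like this sketch; the table layout, AES mode and device-ID offset are assumptions, not reverse-engineered fact:

```python
# Sketch of the kernel module's capability check described above.
import hmac, hashlib
from Crypto.Cipher import AES  # pycryptodome

def vgpu_check(magic: bytes, key: bytes, vgpu_table: dict) -> bool:
    entry = vgpu_table.get(magic)      # 128-bit magic value -> table entry
    if entry is None:
        return False                   # -> error 0x56 "Call not supported"
    blocks, signature = entry
    # Validate the HMAC-SHA256 signature over the encrypted data blocks.
    expected = hmac.new(key, blocks, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False
    # Decrypt with AES-128 and the same key (ECB mode assumed here).
    plain = AES.new(key, AES.MODE_ECB).decrypt(blocks)
    device_id = int.from_bytes(plain[:2], "little")  # offset/endianness assumed
    return device_id_is_vgpu_capable(device_id)      # hypothetical helper
```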
In order to make these checks pass the hooks in vgpu_unlock_hooks.c will
look for an ioremap call that maps the physical address range that contains the magic and key values,
recalculate the addresses of those values into the virtual address space of the kernel module,
monitor memcpy operations reading at those addresses, and
if such an operation occurs,
keep a copy of the value until both are known,
locate the lookup tables in the .rodata section of nv-kernel.o,
find the signature and data blocks,
validate the signature,
decrypt the blocks,
edit the PCI device ID in the decrypted data,
reencrypt the blocks,
regenerate the signature, and
insert the magic, blocks and signature into the table of vGPU capable magic values.
And that's what they do.
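As a companion sketch, under the same assumptions as above, the patch step would look roughly like:

```python
# Sketch: decrypt, swap in a vGPU capable PCI device ID, re-encrypt,
# re-sign, and register the forged entry so the check above passes.
import hmac, hashlib
from Crypto.Cipher import AES  # pycryptodome

def forge_vgpu_entry(magic: bytes, key: bytes, blocks: bytes,
                     new_device_id: int, vgpu_table: dict) -> None:
    plain = bytearray(AES.new(key, AES.MODE_ECB).decrypt(blocks))
    plain[:2] = new_device_id.to_bytes(2, "little")  # edit the device ID
    new_blocks = AES.new(key, AES.MODE_ECB).encrypt(bytes(plain))
    new_sig = hmac.new(key, new_blocks, hashlib.sha256).digest()
    vgpu_table[magic] = (new_blocks, new_sig)
```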
Covid Closed Theaters. But It Also Made Them Accessible.
Many people have become allergic to Zoom as a result of overuse, but as tools, Zoom and its ilk can control what the viewer sees in ways that typical stagecraft cannot
Though “Three Kings” and “Present Laughter” each star Andrew Scott, the cameras’ close-ups showed off his face in a way that I was unable to witness from the cheap seats.
the actors involved, who can otherwise feel Hollywood-larger-than-life, aren’t exempt from muddling through our international tragedy
Time Machine to APFS: How efficient are backups?
When Time Machine backs up to HFS+ (TMH), it both uses that file system's directory hard links to its advantage and suffers from its limitations.
The latter include a lack of support for sparse files, and the restriction of copying whole files rather than just changed storage blocks.
The result is that
APFS sparse files have to be expanded to their full size, and
APFS clones have to be expanded into whole files too,
making HFS+ inefficient as a file system for hosting backups of APFS volumes.
To test this, I created three files on an APFS volume being backed up to an APFS volume, under macOS 11.2.3.
Two were sparse files, whose expanded size was 5 GB each, and the third was a duplicate of one of those.
According to the Finder, each of those three files was 5 GB in size, but each actually occupied just a few KB on disk
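The setup is easy to reproduce; a sketch, with illustrative paths ("cp -c" asks macOS to clone the file on APFS):

```python
# Sketch: two 5 GB sparse files plus one APFS clone, as in the test above.
import subprocess

GB = 2**30
for name in ("sparse1.bin", "sparse2.bin"):
    with open(f"/Volumes/Test/{name}", "wb") as f:
        f.seek(5 * GB - 1)   # jump ahead; APFS leaves the gap unallocated
        f.write(b"\0")       # one real byte at the 5 GB mark
# Duplicate one of them as a clone: no new data blocks on APFS.
subprocess.run(["cp", "-c", "/Volumes/Test/sparse1.bin",
                "/Volumes/Test/clone1.bin"], check=True)
```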
It’s likely that the Files Copied item includes all files which are copied to the backup, excluding clones, with its ‘l’ value reflecting the total expanded size, and the ‘p’ value the size actually copied and space taken on the backup volume. Similarly for the Files Cloned item, for APFS clone files.
This confirms that Time Machine backing up to APFS (TMA) is as efficient as possible in both the copying and storage of APFS sparse files and clones.
This is far superior to TMH, which would of course have had to copy across almost 15 GB of extra data, and would have required a total of 15 GB of space in the backups for these three files.
With sparse files and clones being relatively common in APFS volumes, the efficiency of TMA can make a big difference to the time taken to make backups, and use of storage space on the backup volume.
How the Supreme Court saved the software industry from API copyrights
Last October, justices for the nation's highest court seemed skeptical as well.
Not only were they asking Google's lawyer, Tom Goldstein, a lot of tough questions; a number of them didn't even seem to understand what an API was.
The high court was actually considering two different questions in the case.
In addition to arguing that APIs can't be copyrighted, Google also argued that its use of Oracle's Java API was legal under copyright's fair use doctrine.
The Supreme Court decided to skip over the first question and focus on the second one.
"Given the rapidly changing technological, economic, and business-related circumstances, we believe we should not answer more than is necessary to resolve the parties’ dispute," Justice Stephen Breyer wrote in his majority opinion.
"We shall assume, but purely for argument’s sake, that the entire Sun Java API falls within the definition of that which can be copyrighted."
Courts consider four major criteria when deciding whether a use is fair.
The party that wins on a majority of these factors usually wins the case.
Justice Breyer concluded that all four of these factors point in Google's direction.
nature of the original copyrighted work.
In Justice Breyer's view, the code that defines an API is more like a dictionary than a novel.
the purpose and character of the copying
The high court pointed to two common reasons people copy APIs:
the desire to ensure interoperability between software products and
the desire to enable programmers who learned skills on one platform to re-use those skills elsewhere.
how much material was copied.
11,500 lines - a tiny fraction of the 2.8 million lines that make up Oracle's official Java implementation.
the effect of the copying on the market for the original work.
Sun had struggled to gain traction in the mobile phone market in the years before Google launched Android.
It's not surprising that Justice Breyer wound up writing the majority opinion.
As the court's most senior associate justice, he gets first pick (after the chief justice) of which opinions to write.
He is the court's leading copyright scholar, having written a treatise on copyright law way back in 1970.
In last October's oral argument, Justice Breyer was the most articulate defender of Google's position—sometimes explaining it more clearly than Google's own lawyer
Beyond the specifics of Breyer's fair use analysis, the really important thing about Breyer's ruling is that it clearly articulated how code that defines an API is different from an ordinary computer program and why this difference is important.
Justice Breyer repeated his QWERTY analogy in Monday's ruling and compared an API to a gas pedal.
Breyer's opinion focused a lot on the role of programmers in the Java ecosystem
Breyer argued there was no particular reason why the fruits of those investments should belong to Oracle rather than to the programmers themselves.
This style of reasoning—thinking about the economic and social consequences of legal restrictions—is more familiar to most judges than technical questions like the difference between declaring and implementing code.
By focusing on fair use rather than the more fundamental question of whether an API can be copyrighted at all, Breyer may have provided judges with a roadmap for future cases that they will find it easier to understand and follow.
That could ultimately lead to a more coherent body of law since it doesn't depend on judges understanding what an API is.
a significant downside to the focus on fair use rather than copyright eligibility: it might take longer for defendants in API copyright cases to put an end to lawsuits.
If an API can't be copyrighted at all, then defendants can win a motion to dismiss, one of the earliest steps in the litigation process.
In contrast, defendants often won't be able to raise a fair use defense until the later summary judgment stage, which means defendants could face higher legal bills.
If the new product too closely duplicates the functionality of the original product, a plaintiff could argue that the use is not as transformative as Google's use of Java in Android.
While the court didn't say that APIs can't be copyrighted, it also didn't endorse the Federal Circuit's view that APIs can be copyrighted.
Breyer cited a 1996 appellate court ruling that held that Lotus couldn't copyright the organization of the menu hierarchy in its then-popular Lotus 1-2-3 spreadsheet software, a hint to other courts that Breyer not only considers Lotus to be good law, but also sees a parallel between Lotus's menu hierarchy and modern APIs.
Spillover effects for other fair use cases, even beyond software.