David Gessel

Goodbye, Tortuga.

Thursday, April 25, 2024

On April 21st, 2024, at 20:39, Tortuga was gently put to rest after a three-year struggle with what was probably cancer and a short-lived victory over a mycoplasma blood infection.

She first took advantage of our yard-cat support program in 2009 as a juvenile cat and passed at about 15 or 16 years of age. She lived a good life, had 5 kittens on March 27th, 2010 that were all weaned and adopted out successfully, and grew old never wanting for food, shelter, or comfort, and never suffering any meaningful illness or injury until her last year.

Over the years, she was the beneficiary of a very strong community support network that took her in whenever she needed it and gave her loving care. She had housemates, human and feline, and a few canines over the years, and was always gracious and pleasant, if not always enthusiastic about the four-legged companions.

I am eternally, deeply grateful to everyone who helped her over the years and who made my work and travel possible and Tortuga’s life pleasant and comfortable in my absence, especially in her later years as she needed more care.

She was the best, sweetest cat I’ve ever known. She was always polite, always pleasant, and never scratched or bit, not even when startled or annoyed by dogs. She never broke things or pushed things over or made a mess.

She wasn’t a big fan of other cats, and only a select few were tolerated as guests in her garden. She wanted to start every morning by marking her territory and she ruled her garden with a fierceness that vastly exceeded her tiny size. She started there, spent her last day in the sun there, and will spend eternity there.

Almost every night I was home, she slept in my bed with me. Almost every day I was working at home, she would hop up and sleep quietly between my keyboard and monitor on her little bed there. She didn’t meow much or fuss but purred easily and happily.

In later years, she’d sometimes wake me just before light by prodding my back or nipping to ask for pets; after 5 or 10 minutes of purring and being petted, she would settle back to sleep. It was a ritual that I came to very much enjoy.

Whenever I came home from my travels, no matter how late, as I opened the door into the living space, I’d hear her stir, jumping down from the attic maybe or from my bed or the window perch upstairs and tap-tap-tap down the stairs and trot up to greet me, rubbing my leg and purring. She’d let me scoop her up and snuggle her, though she wasn’t normally a carry cat, and then walk circles around me for 10 or 15 minutes, welcoming me home in the sweetest way possible. She came to know my departures too and always gave me a look of disappointment, sometimes refusing to come to the door to see me off, but usually relenting for one last scritch on the head.

When Corona hit in 2020, I was in Iraq after leaving her in January of 2020 thinking I’d be back in the spring. I couldn’t make it home for almost two years. The longest I’d been away before then was less than 6 months and even that only once or twice. She’s a cat, and by then an old cat, so I didn’t expect much, but in January of 2022, I opened the door late at night to the sound of her tap-tap-tapping down the stairs to greet me.

She was laid to rest in the garden she ruled for 15 years.

I went through the thousands of pictures I’ve taken of her and others have shared with me and tried to find a few from every year from her first foray in 2009 until her last day. If you knew her at some point during this time, I hope this brings back fond memories of a very special kitty.

2009: Tortuga finds food, takes over a house, and becomes part of the family.

2010: Tortuga has kittens and settles into her role as queen of the garden.

2011: Tortuga takes ownership of my desk.

2012

2013

2014

2015

2016

2017

2018

2019

2020: Corona time.

2021: Corona time.

I didn’t get to see Tortuga at all from January of 2020 until January of 2022.

2022: Reunited.

2023

A typical welcome home when I’d been away too long.

2024: The queen of the garden forever.

Posted at 08:35:14 GMT-0700

Category: Cats

A one page home/new tab page with random pictures, time, and weather

Thursday, April 11, 2024

Are you annoyed by the trend of browsers defaulting new tabs to an advertising page? I sure am. And they don’t make it easy to change. I thought, rather than a blank new tab page, why not load something cute and local? I enlisted claude.ai to help expedite the code and got something I like.

myHomePage screenshot

myHomePage.html is a very simple default page that loads a random image from a folder as a background, overlays the current local time in the one correct time format (with seconds, updating live), and throws up the local weather from wttr.in after a delay (to avoid hitting the server unnecessarily if you’re not going to keep the tab blank long enough to see the weather).

Images have to be in a local folder and follow a predictable naming structure, as written “image_001.webp” to “image_999.webp.” If the random enumerator chooses an image name that doesn’t exist, you get a blank page.

Browsers don’t auto-rotate by exif (or webp) metadata, so orient all images in the folder as you’d like them to appear.
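If your originals carry EXIF rotation flags, you can bake the orientation into the pixels first; a minimal sketch using ImageMagick (assuming it is installed and built with webp support; run it on the JPEG originals before conversion if your webp files carry no metadata):

mogrify -auto-orient image_*.webp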

The weather information is only “current,” which isn’t all that useful to me; I’d like tomorrow’s weather, but that’s not quite possible with the one-liner format yet.

Update: I added some code to display today’s and tomorrow’s events, and current todos meeting specific filter tests, from your Thunderbird calendar, if you have it. If not, just don’t cron the bash script and they won’t show. I also changed the mechanism of updating the weather to a 30-minute refresh of the page itself; this way you get more pix AND the calendar data updates every 30 minutes. Web browsers and javascript are pretty well isolated from the host device; you can’t even read a local file in most (let alone write one). All good security, but a problem if you want data from your host computer in a web page without running a local server to deliver it.

My workaround was to write the data into the file itself with a script. Since the data being written is multi-line, I opted to tag the span for insertion with non-breaking spaces, a weird character, and the script sanitizes the input from calendar events extracted from the sqlite database in case some event title includes them. The current config is by default:

~/.myHomePage/myHomePage.html
~/.myHomePage/getEvents.pl
~/.myHomePage/getToDos.pl
~/.myHomePage/putEvents.py
~/.myHomePage/putToDos.py
~/.myHomePage/myHomeImages/image_001.webp
~/.myHomePage/myHomeImages/image_002.webp
etc.

How to set the homepage and new tab default page varies by browser. In Brave, try hamburger→settings→appearance→show home button→select option→paste the location of the myHomePage.html file, e.g. file:///home/(username)/.myHomePage/myHomePage.html

Then just set a cron entry like */30 * * * * /home/(username)/.myHomePage/getEvents.pl for regular updates: cron each of the scripts that’s useful, or write a little bash script to run them in sequence (a sketch follows) and call that with your favorite periodic method.
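Something along these lines would do; a sketch only, where the file names follow the default layout above but the wrapper name updateHomePage.sh and the get-then-put ordering are my assumptions, not necessarily the repo’s exact flow:

#!/bin/bash
# refresh calendar and todo data for myHomePage.html in sequence
cd "$HOME/.myHomePage" || exit 1
./getEvents.pl
./getToDos.pl
./putEvents.py
./putToDos.py

Then a single cron entry covers everything: */30 * * * * /home/(username)/.myHomePage/updateHomePage.sh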

Parsing recurring events in perl is a challenge, and I managed to get claude to ragequit; that’s got to be some sort of a record:

claude rage quits

So the parsing scripts are in python using icalendar. I put them on gitlab at https://gitlab.com/gessel/myhomepage to make it a little easier to mess with, if anyone wants to.

Posted at 05:48:19 GMT-0700

Category: Code, HowTo, Linux, Technology, Weather

Putting ccache on a backed RAM disk to speed compiles

Saturday, March 16, 2024

Why do this

Compiling and building ports can be meaningfully accelerated by caching certain intermediate results (ccache) and by moving work directories from slower media to faster (tmpfs /tmp). If you do regular builds, as one might on a poudriere server, there can be a meaningful write workload to the working directory, which uses up SSD life, possibly meaningfully (though probably not that much if your SSD is modern and big).

If you have a fast, high endurance SSD, putting ccache on it won’t do much. If ccache is going on rotating media, this config will speed up builds appreciably. The save/restore code below will preserve the ccache across reboots and leaves file management inside the ccache directory to ccache itself, while managing the persistence of any other random files that get written outside the directory.

Note that this code, unlike other examples I’ve found, works with FreeBSD as a service and doesn’t flush files that were accessed (read) between reboots, only files that weren’t touched in any way and therefore can (probably) be evicted without penalty; this prevents cruft and clutter from accumulating on the RAM disk now that it has been made non-volatile.

Putting the work directory into a RAM-based tmpfs should speed up builds even compared to a fast SSD, as write speed isn’t a strong suit of SSDs. There’s no persistence code for it, as there’s no expectation that the work directories will persist.

Setup ccache

Setting up ccache is pretty easy. First, install it from ports. If you’re using binary packages, you obviously don’t need ccache.

cd /usr/ports/devel/ccache
make install clean

make.conf

Append a few lines to your make.conf files like so:

nano /etc/make.conf
nano /usr/local/etc/poudriere.d/FBSD_14-0-R-make.conf

Add the following:

CCACHE_DIR=/ram/ccache
WITH_CCACHE_BUILD=yes
# WRKDIRPREFIX="/tmp/ports"

Note that WRKDIRPREFIX (to use tmpfs /tmp, see below) seems to conflict with the same directive in poudriere.conf, so comment it out on poudriere hosts or don’t use the option in poudriere.

poudriere.conf

nano /usr/local/etc/poudriere.conf

CCACHE_DIR=/ram/ccache

/etc/fstab

Next, make the /ram directory and set a limit of how much RAM it can use. 12884901888 is 12GB. Somewhere between 8 and 16GB is probably sufficient for most needs. After a few builds, I was using 1.3GB.

mkdir /ram
nano /etc/fstab
none /ram tmpfs rw,size=12884901888 0 0
mount /ram

ccache.conf

nano /root/.ccache/ccache.conf
nano /usr/local/etc/ccache.conf
cache_dir = /ram/ccache
max_size = 12G

status

Now all ports you build will be compiled entirely in RAM. You can check your ccache usage with:

ccache -s
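To see how much a particular build benefits, zero the counters first and check again afterwards; ccache -z and ccache -s are standard ccache flags, and the nginx port is just an arbitrary example:

ccache -z
cd /usr/ports/www/nginx && make clean build
ccache -s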

Create a cache store/restore script

From: https://forums.gentoo.org/viewtopic-t-838198-start-0.html

This goes in /etc/rc.d and is executed on startup and shutdown, but only on an actual shutdown, not reboot or halt. The correct command to reboot (and preserve /ram) is (you do not need to do this now!):

shutdown -r now

Don’t reboot now; just know that using “reboot” or any command other than shutdown will not run the stop script and won’t sync the cache to NV storage.

Create /etc/rc.d/syncram something like:

#!/bin/sh -
# PROVIDE: syncram
# REQUIRE: FILESYSTEMS
# KEYWORD: nojail shutdown

. /etc/rc.subr

name="syncram"
rcvar="syncram_enable"
desc="rsync ram disk from/to var on startup/shutdown"
stop_cmd="${name}_stop"
start_cmd="${name}_start"

syncram_start()
{
# rsync data from persistent storage to ram disk on boot
# preserving all file attributes
logger syncram-start
/usr/local/bin/rsync -a -A -X -U -H -x \
/var/tmp/syncram/ /ram \
> /dev/null 2>/var/log/syncram-store.log
touch /var/tmp/syncram/.lastsync
}

syncram_stop()
{
# rsync data from ramdisk to persistent storage on shutdown
# preserving all file attributes
logger syncram-stop
# if the dest dir doesn't exist, create it
if [ ! -d /var/tmp/syncram ]; then
mkdir /var/tmp/syncram
fi
# flush any accumulated cruft that wasn't accessed since the last sync
# note tmpfs records accurate atime
if [ -f /ram/.lastsync ]; then
find /ram -type f ! -neweram /var/tmp/syncram/.lastsync -delete
fi
# rsync new or accessed removing unused from target
/usr/local/bin/rsync -a -A -X -U -H -x -del \
/ram/ /var/tmp/syncram \
> /dev/null 2>/var/log/syncram-restore.log
}

load_rc_config $name
run_rc_command "$1"
Make the script executable:

chmod +x /etc/rc.d/syncram

Then edit /etc/rc.conf to include

syncram_enable="YES"

and execute

service syncram onestart
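To confirm it ran, check the log and the RAM disk; note that the persistent copy under /var/tmp/syncram won’t appear until the stop method runs at the first proper shutdown:

tail /var/log/syncram-store.log
ls -al /ram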

Bonus: tmpfs for working builds

tmpfs can also be used to create a similar ramdisk at the /tmp mount point, where it is fairly automatically used by poudriere to speed up builds. There’s a quirk that seems to be a problem (not fully debugged, but the config described here works and survives reboots): putting WRKDIRPREFIX in make.conf AND in poudriere.conf seems to yield “workdirectory” errors, so pick one place for the directive, probably poudriere.conf if you’re running poudriere.

poudriere.conf

nano /usr/local/etc/poudriere.conf
# Use tmpfs(5)
# This can be a space-separated list of options:
# wrkdir - Use tmpfs(5) for port building WRKDIRPREFIX
# data - Use tmpfs(5) for poudriere cache/temp build data
# localbase - Use tmpfs(5) for LOCALBASE (installing ports for packaging/testing)
# all - Run the entire build in memory, including builder jails.
# yes - Enables tmpfs(5) for wrkdir and data
# no - Disable use of tmpfs(5)
# EXAMPLE: USE_TMPFS="wrkdir data"
USE_TMPFS=yes

# How much memory to limit tmpfs size to for *each builder* in GiB
# (default: none)
#TMPFS_LIMIT=4

# List of package globs that are not allowed to use tmpfs for their WRKDIR
# Note that you *must* set TMPFS_BLACKLIST_TMPDIR
# EXAMPLE: TMPFS_BLACKLIST="rust"
TMPFS_BLACKLIST="rust"

# The host path where tmpfs-blacklisted packages can be built in.
# A temporary directory will be generated here and be null-mounted as the
# WRKDIR for any packages listed in TMPFS_BLACKLIST.
# EXAMPLE: TMPFS_BLACKLIST_TMPDIR=${BASEFS}/data/cache/tmp
TMPFS_BLACKLIST_TMPDIR=${BASEFS}/data/cache/tmp

Rust may overflow even a chonky RAM config.

/etc/fstab

nano /etc/fstab
tmpfs /tmp tmpfs rw,mode=1777 0 0

This will “intelligently” allocate remaining RAM to the tmpfs mounted at /tmp and builds should mostly happen there.

mount -a
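You can confirm both tmpfs mounts and their sizes with:

df -h /ram /tmp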

NB

There’s a risk that screwing around with /etc/fstab will break boot – if the system reboots to single user mode, get a shell, navigate to /etc/fstab, check for errors or comment out the lines, and reboot again.

Posted at 16:42:38 GMT-0700

Category: FreeBSD, HowTo, Technology

Audio File Analysis With Sox

Wednesday, February 7, 2024

Sox is a cool program, a “Swiss Army knife of sound processing,” and a useful tool for checking audio files that belongs in anyone’s audio processing workflow. I thought it might be useful for detecting improperly encoded audio files or files that have decayed due to bit rot or cosmic rays or other acoustic calamities, and it is.

Sox has two statistical output command line options, “stat” and “stats,” which output different but useful data. What’s useful about sox here, and what some metadata checking programs (like the very useful MP3Diags-unstable) don’t do, is that it actually decodes the file and computes stats from the audio data itself. This takes some time, about 0.7 sec for a typical (5 min) audio file. That may seem fast, and it is certainly way faster than real time, but if you want to process 22,000 files, it will take 4-5 hours.
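At that rate it’s worth parallelizing; a minimal sketch using xargs with the script from the bottom of this post (soxverify.sh is the name I’m assuming it’s saved under; match -P to your core count). Each result is a single output line, so the csv stays parseable even with interleaved workers:

find . -type f -name "*.mp3" -print0 | xargs -0 -n 1 -P 8 ./soxverify.sh >> stats.csv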

Some of the calculated values mean something fairly obvious; “Flat factor,” for example, is related to the maximum number of identical samples in a row, which would make the waveform “flat.” But the computation isn’t linear, and there is a maximum value (>30 is usually a bad sign).

So I wrote a little program to parse out the results and generate a csv file of all of the results in tabular form for analysis in LibreOffice Calc. I focused on a few variables I thought might be indicative of problems, rather than all of them:

  • DC offset—which you’d hope was always close to zero.
  • Min-Max level difference—min and max should be close to symmetric and usually are, but not always.
  • RMS pk dB—which is normally set for -3 or -6 dB, but shouldn’t peak at nearly silent, -35 dB.
  • Flat factor—which is most often 0, but frequently not.
  • Pk count—the number of samples at peak, which is most often 2
  • Length s—the length of the file in seconds, which might indicate a play problem

After processing 22,000 files, I gathered some statistics on what is “normal” (ish, for this set of files), which may be of some use in interpreting sox results. The source code for my little bash script is at the bottom of the post.

DC Bias


DC bias really should be very close to zero, and most files are fairly close, but some in the sample had a bias greater than 0.1, which even so has no perceptible audio impact.

Min Level – Max Level

Min level is most often normalized to -1 and max level to +1, which would yield a difference of 2, or a difference of absolute values of 0 (as measured), and this is the most common result (31.13%). A few files, 0.05% or so, have a difference greater than 0.34, which is likely to be a problem and worth a listen.


RMS pk dB

Peak dB is a pretty important parameter to optimize as an audio engineer, and common settings are -6 dB and -3 dB for various types of music; however, if a set of files is leveled as a group, individual files can be quite a bit lower or, sometimes, a bit higher. Some types of music, psychobilly for example, might be set even a little over -3 dB. A file much above -3 dB might have sound quality problems or might be corrupted into pure noise; 0.05% of files have a peak dB over -2.2 dB. A file with peak amplitudes much below -30 dB may be silent and certainly will be molto pianissimo; 0.05% of files have a peak dB below -31.2 dB.


A very quiet sample, with a Pk dB of -31.58, would likely have a lot of quantization noise due to the entire program using only about 10% of the total headroom.

-31.58 dB

Flat factor

Flat factor is a complicated measure, but is roughly (though not exactly) the maximum number of consecutive identical samples. @AkselA offered a useful one-liner (sox -n -p synth 10 square 1 norm -3 | sox - -n stats) to verify that it is not, exactly, just a run of identical values; just what it actually is isn’t that well documented. Whatever it is exactly, 0 is the right answer, and 68% of files get it right. Only 0.05% of files have a flat factor greater than 27.


Pk count

Peak count is a good way to measure clipping. 0.16% of files have a pk count > 1,000, but the most common value, in 65.5% of files, is 2, meaning most files are normalized to peak at 100%… exactly twice (log scale chart; the peak is at 2).


As an example, a file with levels set to -2.31 and a flat factor of only 14.31, but with a Pk count of 306,000, looks like this in Audacity with “Show Clipping” on, and yet sounds kinda like you’d think it is supposed to. Go figure.

A ton of clipping

Statistics

What’s life without statistics? Sample pop: 22,096 files; 205 minutes run time, or 0.56 seconds per file.

Stats         | DC bias  | min amp | max amp | min-max | avg pk dB | flat factor | pk count | length s
Mode          | 0.000015 | -1      | 1       | 0       | -10.05    | 0.00        | 2        | 160
Count at Mode | 473      | 7,604   | 7,630   | 6,879   | 39        | 14,940      | 14,472   | 14
% at Mode     | 2.14%    | 34.41%  | 34.53%  | 31.13%  | 0.18%     | 67.61%      | 65.50%   | 0.06%
Average       | 0.00105  | -0.80   | 0.80    | 0.03    | -10.70    | 2.03        | 288.51   | 226.61
Min           | 0        | -1      | 0.0480  | 0       | -34.61    | 0           | 1        | 4.44
Max           | 0.12523  | -0.0478 | 1       | 0.497   | -1.251    | 29.15       | 306,000  | 7,176
Threshold     | 0.1      | -0.085  | 0.085   | 0.25    | -2.2      | 27          | 1,000    | 1,200
Count @ Thld  | 3        | 11      | 10      | 68      | 12        | 12          | 35       | 45
% @ Thld      | 0.01%    | 0.05%   | 0.05%   | 0.31%   | 0.05%     | 0.05%       | 0.16%    | 0.20%

Bash Script

#!/bin/bash

###############################################################
# This program uses sox to analyze an audio file for some
# common indicators that the actual file data may have issues
# such as corruption or having been badly prepared or modified.
# It takes a file path as an input and outputs to stdout the results
# of tests if that file exceeds the threshold values set below
# or, if the last conditional is commented out, all files.
# A typical invocation might be something like:
# find . -depth -type f -name "*.mp3" -exec soxverify.sh {} > stats.csv \;
# The code only handles stereo files: mono or multi-track files will
# throw an error. If sox can't read the file, it will throw an error
# to the csv file. Flagged files probably warrant a sound check.

##############################################
### Set reasonable threshold values ##########
# DC offset should be close to zero, but is almost never exactly
# The program uses the absolute value of DC offset (which can be
# neg or positive) as a test and is normalized to 1.0
# If the value is high, total fidelity might be improved by
# using audacity to remove the bias and recompressing.
# files that exceed the dc_offset_bias will be output with
# Error Code "O"
dc_offset_threshold=0.1

# Most files have fairly symmetric min_level and max_level
# values.  If the min and max aren't symmetric, there may
# be something wrong, so we compute and test. 99.95% of files have
# a delta below 0.34, files with a min_max_delta above 
# min_max_delta_threshold will be flagged EC "D"
min_max_delta_threshold=0.34

# Average peak dB is a standard target for normalization and
# replay gain is commonly used to adjust files or albums that weren't
# normalized to hit that value. 99.95% of files have a
# RMS_pk_dB of < -2.2, higher than that is weird, check the sound.
# Exceeding this threshold generates EC "H"
RMS_pk_dB_threshold=-2.2

# Extremely quiet files might also be indicative of a problem
# though some are simply molto pianissimo. 99.95% of files have
# a minimum RMS_pk_dB > -31.2 . Files with a RMS pk dB < 
# RMS_min_dB_threshold will be flagged with EC "Q"
RMS_min_dB_threshold=-31.2

# Flat_factor is a non-linear measure of sequential samples at the
# same level. 68% of files have a flat factor of 0, but this could
# be intentional for a track with moments of absolute silence
# 99.95% of files have a flat factor < 27. Exceeding this threshold
# generates EC "F"
flat_factor_threshold=27

# peak_count is the number of samples at maximum volume and any value > 2
# is a strong indicator of clipping. 65% of files are mixed so that 2 samples
# peak at max. However, a lot of "loud" music is engineered to clip
# 8% of files have >100 "clipped" samples and 0.16% > 10,000 samples
# In the data set, 0.16% > 1000 samples. Exceeding this threshold
# generates EC "C"
pk_count_threshold=1000

# Zero length (in seconds) or extremely long files may be, depending on
# one's data set, indicative of some error. A file that plays back
# in less time than length_s_threshold will generate EC "S"
# file playing back longer than length_l_threshold: EC "L"
length_s_threshold=4
length_l_threshold=1200



# Check if a file path is provided as an argument
if [ "$#" -ne 1 ]; then
    echo "Usage: $0 "
    exit 1
fi

audio_file="$1"

# Check if the file exists
if [ ! -f "$audio_file" ]; then
    echo "Error: File not found - $audio_file"
    exit 1
fi

# Run sox with -stats option, remove newlines, and capture the output
sox_stats=$(sox "$audio_file" --replay-gain off -n stats 2>&1 | tr '\n' ' ' )

# clean up the output
sox_stats=$(  sed 's/[ ]\+/ /g' <<< $sox_stats )
sox_stats=$(  sed 's/^ //g' <<< $sox_stats )


# Check if the output contains "Overall" as a substring
if [[ ! "$sox_stats" =~ Overall ]]; then
    echo "Error: Unexpected output from sox: $1"
    echo "$sox_stats"
    echo ""
    exit 1
fi


# Extract and set variables
dc_offset=$(echo "$sox_stats" | cut -d ' ' -f 6)
min_level=$(echo "$sox_stats" | cut -d ' ' -f 11)
max_level=$(echo "$sox_stats" | cut -d ' ' -f 16)
RMS_pk_dB=$(echo "$sox_stats" | cut -d ' ' -f 34)
flat_factor=$(echo "$sox_stats" | cut -d ' ' -f 50)
pk_count=$(echo "$sox_stats" | cut -d ' ' -f 55)
length_s=$(echo "$sox_stats" | cut -d ' ' -f 67)

# convert DC offset to absolute value
dc_offset=$(echo "$dc_offset" | tr -d '-')

# convert min and max_level to absolute values:
abs_min_lev=$(echo "$min_level" | tr -d '-')
abs_max_lev=$(echo "$max_level" | tr -d '-')

# compute delta and convert to abs value
min_max_delta_int=$(echo "$abs_max_lev - $abs_min_lev" | bc -l)
min_max_delta=$(echo "$min_max_delta_int" | tr -d '-')

# parse pk_count (sox abbreviates large counts with k/M suffixes)
pk_count=$(  sed 's/k/000/' <<< $pk_count )
pk_count=$(  sed 's/M/000000/' <<< $pk_count )


# Compare values against thresholds
threshold_failed=false
err_code="ERR: "

# Offset bad check
if (( $(echo "$dc_offset > $dc_offset_threshold" | bc -l) )); then
    threshold_failed=true
    err_code+="O"
fi

# Large delta check
if (( $(echo "$min_max_delta >= $min_max_delta_threshold" | bc -l) )); then
    threshold_failed=true
    err_code+="D"
fi

# Mix set too high check
if (( $(echo "$RMS_pk_dB > $RMS_pk_dB_threshold" | bc -l) )); then
    threshold_failed=true
    err_code+="H"
fi

# Very quiet file check
if (( $(echo "$RMS_pk_dB < $RMS_min_dB_threshold" | bc -l) )); then
    threshold_failed=true
    err_code+="Q"
fi

# Flat factor check
if (( $(echo "$flat_factor > $flat_factor_threshold" | bc -l) )); then
    threshold_failed=true
    err_code+="F"
fi

# Clipping check - peak is max and many samples are at peak
if (( $(echo "$max_level >= 1" | bc -l) )); then
    if (( $(echo "$pk_count > $pk_count_threshold" | bc -l) )); then
        threshold_failed=true
        err_code+="C"
    fi
fi

# Short file check
if (( $(echo "$length_s < $length_s_threshold" | bc -l) )); then
    threshold_failed=true
    err_code+="S"
fi

# Long file check
if (( $(echo "$length_s > $length_l_threshold" | bc -l) )); then
    threshold_failed=true
    err_code+="L"
fi

# for data collection purposes, comment out the conditional and the values
# for all found files will be output.
if [ "$threshold_failed" = true ]; then
    echo -e "$1" "\t" "$err_code" "\t" "$dc_offset" "\t" "$min_level" "\t" "$max_level" "\t" "$min_max_delta" "\t" "$RMS_pk_dB" "\t" "$flat_factor" "\t" "$pk_count" "\t" "$length_s"
fi

Posted at 01:40:52 GMT-0700

Category: Audio, Code, HowTo, Linux, Technology

Manually Update Time Zone Data on Android 10

Tuesday, October 31, 2023

One of the updates that stops when your carrier decides you have to buy a new phone to keep their profits up is the time zone data, which means as regions decide they will or won’t continue using standard time and will switch permanently to lazy people time (or not), time zone calculations start to fail, which can be awfully annoying when it causes you to miss flights or meetings. It is probably something you’ll want to keep up to date. Unfortunately, this requires root access to your phone because… profits depend on the velocity by which first world money is converted to e-waste to poison third world children. Yay.

Root requires reflashing your device, which means wiping all your data and apps and reinstalling them, so it’s easier to do on a new phone than backing up, restoring, and re-configuring all your apps. Sooner or later your vendor will stop supporting your device in an attempt to get you to throw it away and buy a new one, and you’ll have to root it to keep it up to date and secure, so you might as well do it now, void their stupid warranty, and take control of your device.

You should also take a moment to write your elected representatives and demand that they take civil action against this crap. Let’s take a short rant break, shall we?

Planned obsolescence, death by security flaws, and vendor locks should be prosecuted, not just as illegal profiteering but as environmental crimes for needlessly flooding the world with e-waste. If you own a device, you have the right to use it as you like, and any entity that, by omission or obfuscation of the reasonable information needed to keep that device operational, deprives legitimate owners of rightful value should answer for it. Willfully obstructing security updates, knowing full well the risks implied, is coercive if not extortion. Actively blocking the provision of third party services intended to mitigate these harms through barratry and legal extortion should be prosecuted aggressively. Everyone who has purchased a phone that has been intentionally and unfairly life-limited by non-replaceable batteries, intimidation of repair services, manipulation of the spare parts market, or restrictions or obfuscation of security updates is due a refund of the value thus denied, plus penalties.

Ah, that feels better, no?

Assuming you have a rooted phone, adb installed on your computer, and your TZ data is out of date, let’s get it fixed, shall we? The problem is that TZ data comes from IANA, from here actually, and is versioned in a form like 2023c, the current version as of now. That’s lovely, but the format they provide is not compatible with Android and needs to be transformed. Google seems to have some tools for this in the FOSS branch of Android, but it seems a little useless without a virtual environment, a PITA. But the good folks at LineageOS (yay, FOSS!!!) maintain their version of the tool with the thus-created output data in their git, which we can use for all Android devices (it seems). The files we need are in this directory: note that these are 2023a, but 2023c is identical to 2023a, reverting some changes made in 2023b because, I don’t know, the whole mess about getting up an hour earlier or later being some traumatic experience when it happens twice a year is catastrophic for people’s sense of well being, but when they get up at different times on days off than on work days, that doesn’t count or something. OMG. So drama. People. Sometimes it hurts to be associated with them as a species. Not that I care, but stop messing around and just pick one. So many rant triggers in this whole mess.

Anyway, proceeding with the assumption your device is rooted and you have adb installed on your computer, the files needed are:

tzdata        a binary file that if you view with a text editor should start with: tzdata2023a
tzlookup.xml  an xml file that should (nearly) start with: 
tz_version    a simple text file that should have one line: 003.001|2023a|001

Download the compressed .tgz archive of the output_data directory from here by clicking on the [tgz] text at the top right.

You should get a .tgz archive, from which you want to extract:

  • tzlookup.xml from the android folder
  • tzdata from the iana folder
  • tz_version from the version folder

Here’s the tricky bit: you gotta get these files to the right places. I mounted my android on my computer, created a folder TZdata in Downloads, and copied the files there; this resolved to /data/media/0/Download/TZdata/ on my device. While you’re there, make a folder like oldTZ in the same place for backup. Everything else is done by command line via adb.
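Alternatively, if mounting the phone as media storage is flaky, adb can copy the files into the same staging folder; a sketch, assuming /sdcard/Download maps to /data/media/0/Download as it does on most devices:

adb push tzdata /sdcard/Download/TZdata/
adb push tzlookup.xml /sdcard/Download/TZdata/
adb push tz_version /sdcard/Download/TZdata/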

(comments are marked with "#"; the prompt is assumed)
# get shell on your device
adb shell
# get root, if this fails, you don't have root, bummer, you don't really own your device.
su root
# verify your tz data is where mine was, if so copypasta should be safe.
find / -name tzdata 2>/dev/null
#output for me looks like some are symlinks
/apex/com.android.tzdata/etc/tz/tzdata
/apex/com.android.tzdata@290000000/etc/tz/tzdata
/apex/com.android.runtime/etc/tz/tzdata
/apex/com.android.runtime@1/etc/tz/tzdata
/system/apex/com.android.runtime.release/etc/tz/tzdata
/system/apex/com.android.tzdata/etc/tz/tzdata
/system/usr/share/zoneinfo/tzdata
# did ya get the same or close enough to figure out what to do next? good.
# Backup your old stuff
cp /system/apex/com.android.tzdata/etc/tz/* /data/media/0/Download/oldTZ
# your directories are read only, so you need to fix that, scary but reversible
mount -o rw,remount /
mount -o rw,remount /apex/com.android.tzdata
mount -o rw,remount /apex/com.android.runtime
# copy the new files over the old files, the last location is legacy and doesn't
# seem to have a copy of tzlookup.xml, so we don't put a new one there, but check
ls /system/usr/share/zoneinfo
# only tzdata and tz_version?  Good.
cp /data/media/0/Download/TZdata/* /apex/com.android.tzdata/etc/tz
cp /data/media/0/Download/TZdata/* /apex/com.android.runtime/etc/tz
cp /data/media/0/Download/TZdata/* /system/apex/com.android.tzdata/etc/tz
cp /data/media/0/Download/TZdata/tz_version /system/usr/share/zoneinfo
cp /data/media/0/Download/TZdata/tzdata /system/usr/share/zoneinfo
# all done, now we just gotta read-only those directories again
mount -o ro,remount /
mount -o ro,remount /apex/com.android.tzdata
mount -o ro,remount /apex/com.android.runtime
# and why not reboot from the command line?
reboot

That was fairly painless once you know what to do and have root, no? It worked for me: my phone rebooted and the time zone database appears to be updated. YMMV, hopefully not on the rebooting-successfully part, but bricking a phone is a risk because, you know, profits. After that tz file surgery I created a new event in a US time zone that recently changed its daylight savings rules to pacify the crazies, and it seemed to work as expected.

Posted at 18:25:30 GMT-0700

Category: Cell phones, Geopost, HowTo, Linux, Technology

Autodictating to self using Whisper to preserve privacy

Thursday, August 17, 2023

Whisper is a very nice bit of code released by OpenAI, the kind people who brought us ChatGPT. It’s a speech to text tool that can handle a huge array of languages and runs locally, as in on your hardware with your data. There’s an API you can use on their servers, but only if you are sure the audio files and text can be released to the public. Never put any data on anyone else’s hardware that you wouldn’t want to have leaked on pastebin or published in the New York Times; that goes for all services including gmail, Outlook, Office 365, etc. Never, ever use someone else’s hardware to store proprietary or sensitive data. It’s just mind-bogglingly stupid, and yet so many people fail to comprehend that “in the cloud” just means “on someone else’s computer.”

This is also true for most speech-to-text tools that (seemingly) kindly offer to translate your ramblings to text out of the goodness of the developer’s hearts. Lots of people use this feature on their phones without realizing that, like Alexa, any voice command tool is an audio monitoring device you stupidly paid for and installed yourself on behalf of corporate spies who are all too happy to listen to whatever you have to say. If you have an Alexa, get a hammer right now and smash it. Go on, I’ll wait. Good job. Privacy restored. Oh, smart TV too? Unplug that stupid thing from the internet. Same for all your “smart” devices. You thought “smart” meant you were smart for buying it? Noooo… you’re a moron for buying it, the company was smart for convincing you to install monitoring devices in your house at your own expense. Congrats. Own goal. When you’re finished destroying all your corporate spyware here’s a way to get speech to text capability on your own hardware without the spying thanks to a very nice bit of FOSS code from OpenAI.

The workflow is to record some audio (speech, probably) on your phone, store & forward that to your server (no synchronous connection required, unlike most spyware), (optionally) store and forward that to your desktop computer with a GPU to run AI speech to text, and pop the results into an email queue to store & forward back to you and all your searchable text archives. Speech is converted to accessible, indexed text easily and robustly and fairly legibly.

For the recording step, I use an Open Source app called Audio Recorder (available on F-Droid and other reliable repositories; if you need an app, try F-droid first and only use Play Store after deciding it is worth being spied on and having ads pushed to you). Audio can be any length, seconds or hours. I configured the settings to record to /storage/emulated/0/recordings and use 48khz, 16 bit, opus for speech; on my device the app supports up to 24bit/192khz, which vastly exceeds the S:N ratio and bandwidth of any microphone I’ll connect to a phone, but nice to know for audiophiles.

I also run NextCloud on my phone, which connects to a NextCloud instance on my own server. NextCloud is like a free, open source version of dropbox and provides directory sharing, calendars, passwords, etc. – almost all the services you’d want a server for, on your own hardware, so you actually retain possession and ownership of your data – amazing! You do not have to give away your data to people you don’t know to use the internet.

The NextCloud client on my phone tries to sync the recording folder to my server, so after I make a recording and hit the ✅ button, the audio is uploaded when the aether makes it possible (and, optionally, deleted from the mobile device). Nextcloud then syncs down to other clients, specifically one of my Linux clients, for processing. It is entirely possible to do everything server side and the same scripts will work, but I don’t have a GPU on my server and Whisper has some dependencies that are easier to meet on a more frequently updated client, at least for now.

I’ve installed whisper on a Linux box, along with a NextCloud client, and there I have a fairly simple script running as a cron job. Every 10 minutes it scans all the files in the locally synced “Recordings” directory, and if there’s an audio file without a matching text “TSV” file, it calls whisper to convert the audio to text and then emails me the converted text. That text is also synced back up to the server and to any other synced device and indexed both on the server and locally to make it easily discoverable (on clients I use the very awesome Recoll for indexing).
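The cron entry itself is nothing special; a sketch, where audio2text.sh is a placeholder name for wherever you save the script at the bottom of this post:

*/10 * * * * /home/{user}/bin/audio2text.sh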

The whole process is very easy and any audio file like this:

is then automagically converted to text

test if we can record in Opus and then autoconvert the file back to text and
get that text as an email automatically this seems like quite a powerful tool
and should make it fairly easy to self take notes don’t we think yes

and then ends up in my inbox like this:

(screenshot: the transcribed note as it arrives in my inbox)

So what script does this good thing? Just a few bash lines. This version uses the time stamps in the TSV files to throw in fairly reasonable paragraph breaks. If the speaker pauses long enough that Whisper inserts a timing break, the script printfs in two newlines. There are a few other tricks below to try to infer or force reasonable paragraph breaks.

It also uses a slightly more robust construction to extract the subject of the email, which includes the first 60 characters of the text, minus any newlines (which make mailx barf). The resulting text is flowed, pretty easy to copypasta into an email or document, and has moderately natural paragraph breaks. It isn’t publication ready, but the accuracy seems quite good, and it is hard to imagine an easier mechanism for making useful autodictations. The process supports very long rambling diatribes; you should be able to talk for hours and get a book’s worth of text in your inbox. I mean, maybe you shouldn’t be able to do that, but you can.

I put in a feature request with the Audio Recorder devs to add some metainfo to the files; what I’d really like is location data. I can script up extracting that and (optionally) converting it to a place name, but aside from Nominatim or Gisography, there aren’t many options other than using big data APIs. Anyway, it seems like a reasonable bit of metadata to insert at the top or tail of the text: time + date + location the stream was recorded. If it is implemented, I’ll update the script to extract the metadata and create a dateline header.

Mailing flowed plain text

I found that mailx can’t handle long (flowed) text lines over ~1000 characters and inserts \n at 998 or 997, which breaks up the pause to paragraphs code, so I switched the mailer to mpack (sudo apt install mpack) which simplifies the mail command and MIME encodes the text body and adds a checksum and a few other modern mail niceties and it now flows as desired without weird line breaks.

And then I found out that mpack thinks it is too good to send text files: it sets the MIME type to application/octet-stream, and using the -c text/plain option yields the somewhat prissy error This program is not appropriate for encoding textual data, oh my. Thunderbird actually parses the attachment into a nicely flowed email, ignoring the quirks, but the best mobile client ever, FairEmail, does not, and treats the attachment as something it would prefer not to display inline (thanks for the details Marcel, you’re awesome!); given mailx isn’t very active any more, changing that behavior is unlikely. Next option: Mutt. Mutt does something to a text attachment (using the -a option) that causes both TB and FairEmail to decline to display it inline, but the body option -i yields a clean text-only email with the right flow, meaning no random line breaks inserted. So don’t install mpack; instead sudo apt install mutt and create a /home/{user}/.muttrc file with at least the below (search engine around if you need to use a remote SMTP server to configure the server address, authentication, and encryption; mutt does the right things):

set realname = "{desired name}"
set from = "{your from email}"
set use_from = yes
set envelope_from = yes
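Once the .muttrc is in place, a quick test from the shell confirms mail flows before wiring it into the script; this mirrors the invocation the script below uses, with the text file and address as placeholders:

echo "" | mutt -F /home/{user}/.muttrc -s "mutt test" -i /path/to/some.txt you@{domain}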

And once that (and whisper) is working, the following script will convert your audio file to text and then mail it to you with paragraph breaks.

TextTiling

I didn’t plan to get into anything more complex, but long text conversions are kinda unreadable because Whisper doesn’t infer paragraphs. There’s a whole science to inferring the contextual shifts that should start new paragraphs using LSA/LDA/LSI; it’s quite advanced mathematically and works sort of OK, but it’s an awful lot of pipping modules and trying this or that.

I opted instead to go for a more brute force method, well three of them, really:

First: whisper has an experimental feature to compute word timings, which would normally be used to generate those unbelievably distracting, annoying, and utterly horrible subtitles that appear one word at a time or bounce a highlight word by word. But the feature can do more than create a miserable, distracting, utterly pretentious viewing experience: word timings seem to increase the frequency and possibly accuracy of gaps in the exported timing data. The first method of paragraph finding is detecting “long” gaps after a Whisper-inferred sentence, effectively deriving speaker intent from cadence and AI content inference. It works OK.

Second: I implemented a wake_word:command set that seds through the text and search-replaces the wake_word:command with the requested punctuation: .¶,:()…—?!“” There’s a whole theory behind wake words, but “insert” seems to be understood well and the command terms are ones that I tend to think of (e.g. “dots” not “ellipsis”), but that’s all obviously editable to preference.

Third: recommended paragraph length depends on the target, and advice ranges from 3 sentences to 6. I tend to be a bit long winded, so I picked 5. There’s an arbitrary bit of script that looks for any line that, after the timing inference and explicit breaks, still has more than 5 sentences and breaks it into multiple lines (meaning paragraph splits when the text is rendered). If that’s too long or too short, change the 5 in /usr/bin/sed -i "s/\([.?!]\) /\1\n\n/5;P;D" "$txt_file".

This all works fairly well, though there’s a known quirk with Whisper where it just randomly stops inserting punctuation after about 10 minutes, and then mechanisms 1 and 3 obviously also fail. The way to deal with that is to break the audio into roughly 5 minute segments and then concatenate the results, but that’s a moderate chunk of code and debug and I’m assuming whisper will be updated. If not, and it gets annoying, I’ll work out that routine.

The script

Replace {user} and {domain} as appropriate to your system. You may also have a different layout for commands; which (for example) will tell you where a binary lives. I find full paths in cron execution provide better reliability at the expense of portability.

#!/bin/bash 

watchdir="/home/username/Work/Recordings/"
to="email@domain.com"
stop_prev="0"
start=""
stop=""
text=""
wake_word="insert"

# Function to check if an audio file has a matching .txt file, then convert to text and email it
convert_to_text() {
    audio_file="$1"
    txt_file="${audio_file%.*}.txt"
    tsv_file="${audio_file%.*}.tsv"
    dir="$(/usr/bin/dirname "${audio_file}")"
    base_ext="$(/usr/bin/basename "${audio_file}")"
    base="${base_ext%.*}"


    if [ ! -e "$tsv_file" ]; then
        /home/gessel/.local/bin/whisper "$audio_file" -f tsv --model small.en -o "$dir" --word_timestamps True --prepend_punctuations True --append_punctuations True --initial_prompt "Hello."

        while IFS=$'\t' read -r start stop text; do
            # First line detection and skip checking it for gaps
            if [ $start == "start" ]; then
                /usr/bin/printf "" > "$txt_file"
                continue
            fi
            # Check if line ends in period or question mark for paragraph insertion
            if [[ $text =~ \.$|\?$ ]]; then
                # find natural pauses and insert paragraph breaks
                if [[ $stop_prev != $start ]]; then
                /usr/bin/printf "\n\n" >> "$txt_file"
                fi
            fi
            /usr/bin/printf "$text " >> "$txt_file"
            stop_prev=$stop
        done  < "$tsv_file"

        stop_prev="0"
        # search for explicit formatting commands and in-line replace them.
        /usr/bin/sed -i "s/[?,. ]*$wake_word period[?,. ]*/. /gI" "$txt_file"
        /usr/bin/sed -i "s/[?,. ]*$wake_word paragraph[?,. ]*/.\n\n/gI" "$txt_file"
        /usr/bin/sed -i "s/[?,. ]*$wake_word comma[?,. ]*/, /gI" "$txt_file"
        /usr/bin/sed -i "s/[?,. ]*$wake_word colon[?,. ]*/: /gI" "$txt_file"
        /usr/bin/sed -i "s/[?,. ]*$wake_word open paren[?,. ]*/ (/gI" "$txt_file"
        /usr/bin/sed -i "s/[?,. ]*$wake_word close paren[?,. ]*/) /gI" "$txt_file"
        /usr/bin/sed -i "s/[?,. ]*$wake_word dots[?,. ]*/… /gI" "$txt_file"
        /usr/bin/sed -i "s/[?,. ]*$wake_word long dash[?,. ]*/—/gI" "$txt_file"
        /usr/bin/sed -i "s/[?,. ]*$wake_word question[?,. ]*/? /gI" "$txt_file"
        /usr/bin/sed -i "s/[?,. ]*$wake_word exclamation[?,. ]*/? /gI" "$txt_file"
        /usr/bin/sed -i "s/[?,. ]*$wake_word open quote[?,. ]*/ “/gI" "$txt_file"
        /usr/bin/sed -i "s/[?,. ]*$wake_word close quote[?,. ]*/” /gI" "$txt_file"
        # brute force paragraphing: 5 sentences is enough, adjust for audience
        /usr/bin/sed -i "s/\([.?!]\) /\1\n\n/5;P;D" "$txt_file"
        # fix any sentence start/finish errors induced by the above edits
        /usr/bin/sed -i "s/^[a-z]/\U&/g" "$txt_file" # start with uppercase
        /usr/bin/sed -i "s/: [A-Z]/\L&/g" "$txt_file" # no uppercase after colon
        /usr/bin/sed -i 's/\s\+$//g' "$txt_file" # don't end with whitespace
        /usr/bin/sed -i "s/[,]$/./g" "$txt_file" # don't end with a comma, use .
        /usr/bin/sed -i '/[.?!]$/! s/$/./' "$txt_file" # if not ending with punctuation at all, add .
        /usr/bin/sed -i 's/^\.$//'  "$txt_file" # oops, no lines with just periods 
        /usr/bin/sed -i "s/\([a-z]\) \./\1./g" "$txt_file" # remove any spaces before periods
        /usr/bin/sed -i "s/  / /g" "$txt_file" # no double spaces
        /usr/bin/sed -i 's/\([0-9]\+\) \([FC]\) /\1°\2 /g' "$txt_file" # write temp to AMA, Chicago, Nat Geo, NOT APA or NIST
        # generate subject line from first sentence no longer than 80 char and remove any newlines
        subject=$(/usr/bin/head -n 1 -c 80 "$txt_file" | /usr/bin/sed 's/\(.*\)\..*/\1/')
        subject=$(/usr/bin/echo $subject | /usr/bin/tr -d '\n')
        subject=$(/usr/bin/echo $subject | /usr/bin/tr -d '\r')
        # send the cleaned up file as email
        /usr/bin/echo "" | /usr/bin/mutt  -F /home/gessel/.muttrc -s "AudioText - $base - $subject" -i "$txt_file" $to
    fi
}

# Main script scan the watch dir for unprocessed files (within the last 30 days)
/usr/bin/find "$watchdir" -mtime -30 -type f \( -iname \*.opus -o -iname \*.wav -o -iname \*.ogg -o -iname \*.mp3 \) | while read audio_file; do
    convert_to_text "$audio_file"
done

Note that Whisper has a lot of tricks not used here. I’ve used it to add subtitles to lectures and it can do things like auto-translate one spoken language into another text language, and much more.

Posted at 10:53:58 GMT-0700

Category: Code, HowTo, Linux, Technology

Projecting Qubit Realizations to the Cryptopocalpyse Date

Friday, August 4, 2023

RSA 2048 is predicted to fail by 2042-01-15 at 02:01:28.
Plan your bank withdrawals accordingly.

Way back in the ancient era of 2001, long before the days of iPhones, back when TV was in black and white and dinosaurs still roamed the earth, I delivered a talk on quantum computing at DEF CON 9.0. In the conclusion I offered some projections about the growth of quantum computing based on reported growth of qubits to date. Between the first qubit in 1995 and the 8 qubit system announced before my talk in 2001, qubits were doubling about every 2 years.

I drew a comparison with Moore’s law, that computers double in power every 18 months, or as 2^(years/1.5). A feature of quantum computers is that the power of a quantum computer increases as a power of the number of qubits, which is itself doubling at some rate (then two years), or as 2^(2^(years/2)); in ASCII: Moore’s law is 2^(Y/1.5) and Gessel’s law is 2^2^(Y/2).

Quantum Computing and Cryptography 2001 7.0 Conclusion slide

As far as I know, nobody has taken up my formulation of quantum computing power as a time series double exponential function of the number of qubits in a parallel structure to Moore’s law. It seems compelling, despite obviously having a few (minor) flaws. A strong counter argument to my predictions is that useful quantum computers require stable, actionable qubits, not noisy ones that might or might not be in a useful state when measured. Data on stable qubit systems is still too limited to extrapolate meaningfully, though a variety of error correction techniques have been developed in the past two decades to enable working, reliable quantum computers. Those error correction techniques work by combining many “raw” qubits into a single “logical” qubit at around a 10:1 ratio, which certainly changes the regression substantially, though not the formulation of my “law.”

I generated a regression of qubit growth along the full useful quantum computer history, 1998–2023, and performed a least-squares fit to an exponential doubling period and got 3.376 years, quite a bit slower than the heady early years’ 2.0 doubling rate. On the other hand, fitting an exponential curve to all announcements in the modern 2016–2023 period yields a doubling period of only 1.074 years. The qubit doubling period is only 0.820 years if we fit to just the most powerful quantum computers released, ignoring various projects’ lower-than-maximum qubit count announcements; I can see arguments for either, though I selected the former as somewhat less aggressive.

Relative Power of Classical vs. Quantum Computers

From this data, I offer a formulation of what I really hope someone else somewhere will call, at least once, “Gessel’s Law”: P = 2^2^(y/1.1) or, more generally, given that we still don’t have enough data for a meaningful regression, P = 2^2^(y/d); quantum computational power will grow as 2 to the power 2 to the power years over a doubling period, which will become more stable as the physics advance.

Gidney & Ekra (of Google) published How to factor 2048-bit RSA integers in 8 hours using 20 million noisy qubits, 2021-04-13. So far for the most efficient known (as in not hidden behind classification, should such classified devices exist) explicit algorithm for cracking RSA. The qubit requirement, 2×10⁷, is certainly daunting, but with a doubling time of 1.074 years, we can expect to have a 20,000,000 qubit computer by 2042. Variations will also crack Diffie-Hellman and even elliptic curves, creating some very serious security problems for the world not just from the failure of encryption but the exposure of all so-far encrypted data to unauthorized decryption.

Based on the 2016–2023 all-announcements regression and Gidney & Ekerå, we predict RSA 2048 will fall on 2042-01-15 at 2 am, a prediction not caveated by the error correction requirement for stable qubits, as they count noisy, raw qubits as I do. As a validity check, my regression predicts “Quantum Supremacy” right at Google’s 2022 announcement.

Qubit Realization by Date and several regression curve fits to the data

IQM Quantum Computer Espoo Finland, by Ragsxl

Posted at 05:34:25 GMT-0700

Category: Privacy, Technology

AI PSYOPS are changing strategic messaging

Saturday, July 29, 2023

Social media fundamentally changed strategic messaging, cutting the cost per effect by at least two orders of magnitude, probably more; it has become the most cost effective munition in the global arsenal. Even when it took teams of actual humans to populate content and troll farms to flood social media with messaging intended to produce a desired outcome (for example, to swing an election, start a war, damage alliances, break treaties, or generate support for one particular policy, foreign or domestic), it was still a revolution in reduced-cost warfare.

Take Operation INFEKTION, the active measures campaign run by the KGB starting in about 1983 “to create a favorable opinion for us abroad that this disease (AIDS) is the result of secret experiments with a new type of biological weapon by the secret services of the USA and the Pentagon that spun out of control.”

This campaign leveraged assets put in place as far back as 1962 and eventually consumed the authority of Prof. Jakob Segal as a self-referential authoritative citation. After a little more than a decade of relentless media placements of strategic messaging, even in the United States more than 25% of the population had been convinced AIDS was a government project and 12% had been manipulated into believing it was created and spread by the CIA. The project was tremendously successful despite having to overcome the then-standard and generally principled editorial gatekeeping that protected “traditional” media from abuse and cooptation, by manufacturing plausible chains of authority and fabricating deep and broad reference chains to thwart fact checking.

By the 2016 election, the KGB’s successors, the IRA and GRU, efficiently and expertly leveraged social media to achieve even more impressive results, possibly winning the most significant military battle in history by altering the outcome of the US election at a cost of only a few billion dollars and within a mere 2-3 years of effort.

Any-to-any publishing circumvents editorial protections (he writes without a trace of irony). What might otherwise be a limitation, a psyop being clearly outside any authoritative endorsement, something that required the consumption of an asset like Jakob Segal to achieve in an earlier era, has been overwhelmingly diminished by a parallel effort to destroy trust in institutions and authority, creating a direct path to shape the beliefs of targets through mass individualization of messaging unchecked by any need for longitudinal reputation building.

The 2016 effort still cost billions, requiring a massive capacity build of English-speaking, internet-savvy teams inducted into “troll farms” (many ironically located in Bulgaria, given that country’s role in Operation INFEKTION), but even that model may already be obsolete just 8 years later.

Many have written about ChatGPT representing some sort of existential risk to humanity’s future, some quick resolution to the Fermi Paradox, but the real risk is an acceleration of the destruction of objective truth and the substitution of conceptual paradigms that align with strategic outcomes.

As an example, let me introduce to you Dr. Alexander Greene, a person ChatGPT tells us “is a highly esteemed and celebrated professor with a remarkable career dedicated to advancing the fields of green energy and engineering.”

ChatGPT fabricates the lauded synthetic Dr. Alexander Greene

Obviously, it’s hard to really believe Dr. Greene without seeing the man himself, but fortunately we have a tool for that too:

Dr Alexander Greene, synthetic professor of green things and advocate for fossil fuels.

A few images from bing/Dall-E and we can create a very convincing article that would easily pass muster as an authoritative discussion of the benefits of continuing to burn fossil fuels. With minimal editing and formatting, mostly just cutting the caveats that ChatGPT inserts in counterfactual text requests, we have such pearls of wisdom to impart upon the world as:

Access to affordable and reliable energy is a crucial driver of economic development, and historically, fossil fuels have played a significant role in providing low-cost energy solutions. While there are concerns about the environmental impact of fossil fuels, particularly their contribution to climate change, it is essential to understand the benefits they have brought to the developing world and the potential consequences of increasing energy costs.

Read the whole synthetic article in pdf form below and consider the difficulty of finding a shared factual foundation in a world where it is trivial to synthesize plausible authority.

The Benefits of Low-Cost Energy from Fossil Fuels and the Impact of Increasing Energy Costs on Developing Nations, “by” “Dr. Alexander Greene” (ghost written by ChatGPT).

Posted at 17:02:46 GMT-0700

Category: SecurityTechnology