No-cost PAL to NTSC conversion

This is a simple formula for low-cost conversion from PAL to NTSC material or, more accurately, 576/25i to 480/30i (or 30000/1001 fps to be exact). It uses field-blending, and there is no motion interpolation. Therefore, the method is quick to use on ordinary computers, but will not meet broadcast standards if you use it for a sustained length of time, or for a large part of your footage in a completed programme.

Prerequisites are AviSynth and FFmpeg with the AviSynth front end compiled in. You can get this, compiled for Windows 64-bit, from my website (see earlier blogs).

First, get your material into 25fps form. My source for this test was a DVD of a film, so the pictures are PsF (progressively scanned in an interlaced frame).
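
If you are not sure what your source contains, FFprobe (installed alongside FFmpeg) can report its frame rate and field order. A quick check on the example file used below might be:

$ ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate,field_order "C:\Users\john_000\Videos\billsfilm\film-stereo.mp4"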

Use an AviSynth script like this, which I’ve called 25to2997.avs. I’ve used the FFmpegSource2 plugin to import the footage, but you can use anything else for input if you have it.


# Import the video. Disregard the sound for now.
FFmpegSource2("C:\Users\john_000\Videos\billsfilm\film-stereo.mp4")
# If this is a DV source, you'll want to switch the fields because PAL DV tapes
# record the fields the wrong way around i.e. Bottom Field First instead of
# Top Field First
# ComplementParity()
# Now deinterlace the pictures, producing a whole new frame for each field
Bob()
# ConvertFPS is AviSynth's function to convert frame rate by blending frames
# to achieve an illusion of continuous motion.
# The first number is the numerator, and the second the denominator, for fractional
# frame rates such as 29.97
ConvertFPS(60000, 1001)
# Now re-interlace: split the frames into fields, keep alternate fields from
# successive frames, then weave them back into 29.97fps interlaced frames
SeparateFields().SelectEvery(4, 0, 3)
Weave()

You now have interlaced, field-blended video at 29.97fps. If this is a PAL to NTSC standard-definition conversion, you’ll also need to change the size of the frames from 720×576 to 720×480.
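
Before committing to a full encode, you can preview the script’s output directly, provided your FFplay build (like the FFmpeg build mentioned above) has AviSynth support compiled in:

$ ffplay C:\Users\john_000\Documents\25to2997.avs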

This is where FFmpeg can perform the scaling and the encoding in one pass, and correct some other problems with the incoming video at the same time. FFmpeg also brings in the audio from the original file, using the -map option to take its audio while discarding its video. Here’s my command line for converting this widescreen PAL film into a widescreen NTSC file suitable for burning to a DVD.

So, what you have here is an FFmpeg command line that:

  • uses AviSynth to create a field-blended input,
  • sends this to FFmpeg,
  • sends the audio track of the original film to FFmpeg,
  • scales it using a high-quality (Lanczos) algorithm, making sure interlaced-aware scaling is used,
  • sharpens it a little because 480i pictures are awfully soft to my eyes,
  • makes the colour standard clear,
  • sets the aspect ratio explicitly,
  • sets the correct field labelling, because it came in incorrectly,
  • sets certain MPEG-2 video coding parameters for higher quality,
  • sets a bit-rate that will allow this particular movie to fit on a single-layer DVD,
  • brings in the audio track and wraps it up into a convenient container for DVD burning.


$ ffmpeg -i C:\Users\john_000\Documents\25to2997.avs -i C:\Users\john_000\Videos\billsfilm\film-stereo.mp4 -map 0:0 -map 1:1 -vf scale=720:480:lanczos:interl=1:out_color_matrix=bt601,smartblur=1.0:-0.6,setdar=16/9,setfield=tff -target ntsc-dvd -flags +ilme+ildct -bufsize 1835008 -b:v 5.5M -aspect 16:9 -g 15 -bf 2 -trellis 2 -cmp 3 -subcmp 3 -mbd 2 -me_method epzs -intra_vlc 1 -acodec ac3 -b:a 256k film-stereo-30fps.vob

Court Order: Lewes District Council v. Network Rail

Background: this is the injunction issued today to delay Network Rail’s alterations to the historic railway level crossing at Plumpton, pending planning approval. The works began without such approval.

In the County Court at Brighton

Claim number: B01BN100
Date: 20 October 2015

LEWES DISTRICT COUNCIL (Claimant) ref: LCD/JCS/4367
NETWORK RAIL INFRASTRUCTURE LTD (290587) (Defendant)

Before His Honour Judge Simpkiss sitting at the County Court at Brighton, William Street, Brighton, East Sussex, BN2 0RF

Upon hearing the Solicitor for the Claimant

To:
Network Rail Infrastructure Limited (290587)
Network Rail
One Eversholt Street
London
NW11 2DN

PENAL NOTICE

IF YOU THE WITHIN NAMED, NETWORK RAIL INFRASTRUCTURE LIMITED, BY ITS SERVANTS, AGENTS, OFFICERS OR OTHERWISE, DISOBEY THIS ORDER YOU MAY BE HELD TO BE IN CONTEMPT OF COURT AND LIABLE TO IMPRISONMENT OR FINED OR YOUR ASSETS SEIZED.

An application was made today 20th October 2015 by Lewes District Council Legal Services for the Applicant to HHJ Simpkiss who heard the ex parte application.

The judge read the witness statement of Andrew Hill listed in Schedule A.

As a result of the application IT IS ORDERED THAT Network Rail, by its servants, agents, officers or otherwise be forbidden from causing or permitting any works to be carried out without Listed Building Consent for the removal of the Plumpton Level Crossing Gates, or recovering any associated equipment or any mechanism for the operation of the gates situate at Station Road, Plumpton Green, until 4th November 2015 with permission to apply on 48 hours notice to vary or set aside this order.

COSTS OF THE APPLICATION
The Respondent shall pay the Applicant’s costs of this Application.

RETURN DATE
The Injunction is adjourned to 10am on the 3rd November 2015 at Lewes County Court, High Street, Lewes, BN7 1YB.

SCHEDULE A
The Judge read the following witness statement before making this Order:
Mr. Andrew Hill

Make the Edirol or Roland UM-1 and UM-1X work on Windows 10

Roland, who make the venerable MIDI to USB interface the UM-1, and its more recent version the UM-1X, claim that they will not support Windows 10. And, indeed, when you install Windows 10 onto a machine with the UM-1X plugged in, it remains unrecognised by the new operating system.

You can fix this with a text editor. Remember also you must set your Windows installation NOT to enforce driver signing. Please see the comments (below) for how to do this.

  1. Download from Roland the driver archive for Windows 8.1. Its filename is um1_w81d_v101.zip.
  2. Unpack the archive.
  3. If you have a 64-bit machine, browse within the archive to this folder:
    um1_w81d_v101\Files\64bit\Files

  4. Open the file RDIF1009.INF in your favourite text editor.
  5. Edit line 33, changing:
    %MfgName%=Roland,NTamd64.6.2,NTamd64.7
    to:
    %MfgName%=Roland,NTamd64.10.0,NTamd64.7
  6. Edit line 42, changing:
    [Roland.NTamd64.6.2]
    to:
    [Roland.NTamd64.10.0]

  7. Save this file and exit the editor.
    One successful user reported that he plugged-in the UM-1 at this point.
  8. Browse to your Device Manager by holding down the ‘Windows’ key, pressing ‘X’, then selecting “Device Manager” from the menu that appears.
  9. Double-click on your non-functioning UM-1, which will be labelled “Unknown device”.
  10. Select the second tab: “Driver”
  11. Click the “Update driver…” button
  12. Click “Browse my computer for driver software”
  13. Browse to the folder containing the file you have just edited
  14. Click ‘OK’ to select the directory, then click ‘Next’
  15. When Windows complains “Windows can’t verify the publisher of this driver software”, click “Install this driver software anyway”
  16. Wait until you see “Windows has successfully updated your driver software.”

Done! You’ve installed the Windows 8.1 driver for the Roland/Edirol UM-1X on Windows 10, even though Roland state they don’t support this device. I can confirm that the MIDI input works just fine.

Converting Avid Meridien MJPEG media with FFmpeg

One of the older native codecs in Avid, the Meridien motion-JPEG (MJPEG) codec, can be read by the FFmpeg utility and, therefore, converted into other more popular codecs and wrappers. However, there is a caveat concerning the luminance levels.

Meridien MJPEG files, whether wrapped as MXF or OMF, are signalled as full-range (0 – 255) files, but in fact they contain limited range (16 – 235) data, leaving room for super-white and super-black excursions. Their pixel format is read as yuvj422p but, in conversion to the usual yuv420p used for most consumer-oriented formats, the range is reduced yet again. So, this must be prevented.
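
To see how FFmpeg is reading a particular file, FFprobe will report the detected pixel format; a quick check, using a placeholder filename, might be:

$ ffprobe -v error -select_streams v:0 -show_entries stream=pix_fmt VIDEO.mxf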

Remember also that Avid’s Meridien files contain 16 blank lines at the top for VBI information. These need to be cropped.

There is a simple solution, using FFmpeg’s scaler. For SD 576-line files (“PAL” standard), include, early in the video filter chain, this:

scale=src_range=0:dst_range=1,crop=x=0:y=16:w=720:h=576

For 480-line (“NTSC” standard) files, use this:

scale=src_range=0:dst_range=1,crop=x=0:y=16:w=720:h=480

A full command line to convert a UK television picture might be:

$ ffmpeg -i VIDEO.mxf -i AUDIO.wav -vcodec libx264 -acodec libfdk_aac -vf scale=src_range=0:dst_range=1,crop=x=0:y=16:w=720:h=576,setdar=16/9 -aspect 16:9 VIDEO-AND-AUDIO-H264.mkv

Useful Avid Media Composer Console commands

While waiting for a file to convert, I typed help commands into the Avid console. These might be useful to some Avid users. There are many, many more.

The command every Avid user should know is subsys monpane debug, allowing precomputes and other unusual media to be loaded directly into the composer window, and used as edit sources.

These console commands are from Media Composer 8.2.2. Later versions may vary.

Access the console from the “Tools” menu, or by typing CTRL-6 or COMMAND-6.

AllDrives 1

Makes all drives act as media drives

AllDrives 2

Makes all drives behave normally

AllowCrossRateTranscode

This is a toggle setting. It produces the warning: “Clips created using the cross-rate transcode feature can only be used for playback. The newly generated media is valid for playback, but the clips cannot be used with many operations. They are not supported by operations such as Modify, Relink, Batch Capture, Batch Import, Decompose, Import/Export, and use within the Interplay environment.”

AllowUNCWrite

This is a toggle. Permits Avid to write to shares on Windows-style networks that are not mounted as drive letters, such as \\OFFLINE_001\MyDirectory\Filename.mov

AMA_SetLoggingLevel

0x0 = Errors, 0x01 = Warnings, 0x02 = Verbose, 0x04 = Trace, 0x08 = Info, 0xff = All

answer

Create a pop-up asking a question with up to three click-button choices, and return the result. Example: answer "How many beans make five?" with "One" OR "Two" OR "Five"

asiocontrol

Opens your ASIO control panel for audio I/O. If you don’t use this interface, your operating system’s audio control panel opens.

audioextras

Enable or disable extra audio features. A dialogue box appears.

Disable3D

Disables some OpenGL code. May be useful for older video adapters.

Enable3D

Re-enables some OpenGL if earlier disabled with Disable3D

ForceHDTranscode

True: forces transcode of HD media to SD before export; False: makes transcoding of HD media optional before export

HDTitleFilter

A toggle: controls the filtering of HD titles during downconversion to SD

IgnoreQTRate

For video-only QuickTime files: ignores file frame rate, and imports file frame-by-frame. Otherwise, QuickTime files with frame rates different from your project are imported with a crude frame-dropping or frame-repeating speed-change added. You’ll see this a lot in cinema films where television archive has been incorrectly imported.

LegacyOverlay

Off: use best desktop video overlay method advertised by OS. On: use older methods for graphics adapters with incomplete OpenGL implementations. In case of failure (e.g. Avid not starting) hold “L” and “O” at power up to force this mode once, then enter the console command.

MulticamPreload

Changes preloaded frames for multicam editing. May reduce stuttering.

RenameMediaFiles

Without any arguments, renames ALL files within Avid MediaFiles to more accurately reflect the project name and clip name.

ResampleCapturedAudio

May be useful during ingest if someone has recorded a DV tape with 32kHz sampled audio

subsys monpane debug

Puts the Monitor pane subsystem into debug mode. Very useful: allows precomputes and raw media files to be displayed and edited into timelines.

TCBreakTolerance

Alter Avid’s tolerance to timecode skips. Currently, anything under 7 frames starts a new clip.

Play Avid Meridien MXF or OMF with FFplay

You may have found an old Avid drive containing MXF or OMF files compressed with Meridien codecs. Sometimes these are known by their compression ratio, e.g. “2:1” or “14:1”.

Because of the combination of the MXF/OMF container and the Meridien codec, rarely found in modern software apart from Avid, these files can be difficult to play, even if your QuickTime installation contains the Avid-distributed codecs.

So how can you view these files for free?

Easy. Avid Meridien compression is actually MJPEG – Motion JPEG compression. The free and open-source utility FFmpeg has a sister player: FFplay. Even though it doesn’t know how to find an MJPEG codec inside an MXF OP1A wrapper, or an Avid OMF wrapper, you can tell it what to do with a simple command line. Then, you can view any Meridien-compressed MXF or OMF files on your drive.

As a guide, MXF video files are named in the following way:

CLIPnameVnn.<ID>_<ID>.mxf

“ID” is a hexadecimal string that Avid uses to track the media. The pattern for OMF files is similar.

When the letter ‘V’ follows a clip name, and is succeeded by a pair of digits, you’ve found a video file. Then, the command to play it is:

ffplay -f mjpeg CLIPV01.<ID>_<ID>.MXF

The trick is the “-f mjpeg” in the command line. This forces FFplay to interpret the file as containing data encoded as Motion JPEG.

And now you can see your pictures. They’ll play with the VBI data included, and the colour range may appear washed out because you’re displaying broadcast-level pictures on a computer-level display.
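
The same forced-format trick works with FFmpeg itself if you want to transcode a clip rather than just view it. A minimal sketch, reusing the hypothetical filename above and assuming an x264-capable build, might be (you may also need -r before the input to set the true frame rate, because the raw MJPEG stream carries no timing information):

ffmpeg -f mjpeg -i CLIPV01.<ID>_<ID>.MXF -vcodec libx264 -crf 18 CLIP-H264.mkv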

Export Avid Mixdowns straight into FFmpeg

Avid’s export capabilities have been rough around the edges, to say the least, for some time, especially when trying to use QuickTime to encode into a non-native codec such as H.264. But you can now perform a mixdown within Avid to your favourite native Avid codec, and then use FFmpeg to combine those renders directly into a single multiplexed file suitable for burning to DVD or web upload. You don’t need to export from Avid itself.

Here’s how.

First, mark an in and an out point in your Avid timeline, and perform two mixdowns one after the other: one for video, one for audio. For your video mixdown, use the codec that appears most often in your sequence, or the codec you used for most or all of your renders.

Internally, your mixdowns will have been saved by your Avid as MXF files. For a sequence with stereo audio, there will be three files: one for video, and two for the audio (being the left and right channels).

At this point, you can close your Avid application, or use it for something else. The processing that follows takes place separately from the Avid program.

Use Windows Explorer, or the Mac Finder, to locate the most recently modified files in your “Avid MediaFiles” folders. In this example, this is what I saw:

[Screenshot: the most recently modified files at the top of the Avid MediaFiles folder]

The last three files, with similar filenames, are the video file and the two audio files that were created during the mixdown. FFmpeg will now encode and multiplex these three files to produce what you need.

If you’d like to perform something else useful on the audio at this point, e.g. making its level meet the EBU R.128 loudness specification, that’s easy. First, I measured the volume:

ffmpeg -i "\Avid Mediafiles\MXF\1\LECTURE 25i 11JUNE2558B15BD.mxf" -i "\Avid MediaFiles\MXF\1\LECTURE 25i 11J558B15BD.1.mxf" -filter_complex "[0:a][1:a]amerge=inputs=2,ebur128=peak=true:framelog=verbose[aout]" -map "[aout]" -f null NULL

…and out came the result:

[Parsed_ebur128_1 @ 0000000000c7c3c0] Summary:

Integrated loudness:
I: -17.1 LUFS
Threshold: -27.6 LUFS

Loudness range:
LRA: 4.9 LU
Threshold: -37.7 LUFS
LRA low: -20.5 LUFS
LRA high: -15.6 LUFS

True peak:
Peak: 2.3 dBFS

So, against the EBU R.128 target of -23 LUFS, the audio gain in dB required is:

-23 - (-17.1) = -5.9

Also, check the “True peak” value. If it hits 0dB, you’ve probably got some clipping and may want to revise your audio mix. In any case, your broadcaster will want the True Peak to be below a specified value after your volume has been adjusted. In the case above, there is indeed a spike that is reconstructed by FFmpeg’s interpolation algorithm, but which was brought into specification by the volume attenuation in the next step below.
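
If you want to confirm that the attenuated mix meets the specification, the measurement can simply be re-run with the volume filter inserted ahead of ebur128, along these lines:

ffmpeg -i "\Avid Mediafiles\MXF\1\LECTURE 25i 11JUNE2558B15BD.mxf" -i "\Avid MediaFiles\MXF\1\LECTURE 25i 11J558B15BD.1.mxf" -filter_complex "[0:a][1:a]amerge=inputs=2,volume=-5.9dB,ebur128=peak=true:framelog=verbose[aout]" -map "[aout]" -f null NULL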

Next, in my example, I issued a command line to do several things:

  1. combine both audio streams into a single stereo stream,
  2. adjust the audio gain,
  3. encode the stereo audio stream,
  4. post-process the video stream to clean it up, and resize it for YouTube’s 480p format,
  5. encode the video stream,
  6. add some useful metadata to show decoders how to behave,
  7. multiplex the encoded streams together.

The constant-rate-factor (-crf) setting, which determines the encoding quality, is deliberately set for high quality (a low CRF number), because online services such as YouTube and Vimeo always re-encode whatever you upload.

Here is the whole command line I just used to create a standard-definition YouTube upload, with colour and video levels correctly encoded into the bitstream:

ffmpeg -i "\Avid MediaFiles\MXF\1\LECTURE 25i 11JUNE2558B037E.mxf" -i "\Avid Mediafiles\MXF\1\LECTURE 25i 11JUNE2558B15BD.mxf" -i "\Avid MediaFiles\MXF\1\LECTURE 25i 11J558B15BD.1.mxf" -filter_complex "[1:a][2:a]amerge=inputs=2,volume=-5.9dB[aout];[0:v]hqdn3d,scale=854:480,smartblur=1.0:-1.0,setdar=16/9[vout]" -map "[vout]" -map "[aout]" -acodec libfdk_aac -vbr 5 -vcodec libx264 -aspect 16:9 -crf 18 -x264opts fullrange=off:transfer=bt470bg:colorprim=bt470bg:colormatrix=bt470bg -metadata title="Elsewhere and Otherwise" -metadata artist="Peter Messer" LECTURE.mkv

With just two quick command lines, my Avid mixdowns are ready for YouTube upload without having to rely on any of Avid’s strange interaction with QuickTime during export. And the encoding is much faster than Avid can manage.

Diplacusis — or why do some people hate violins?

tl;dr — I had very disturbing diplacusis (double hearing) during a really bad bout of influenza, but recovered after a month.

The Diplacusis Diary

Being a Tonmeister, and loving music all my life, I didn’t understand what drove some people, even those in my family, to dislike violins. Where I enjoyed beautiful, warm, expressive singing tone, they heard “tuneless cats wailing” or worse.

Whereas the main complainant among my relatives didn’t seem to mind piano music too much, orchestras and violins in particular were, to her, the equivalent of a knife edge being dragged squealing across a china plate.

How could there be such a difference?

Until last month, I had no idea. But now I know.

For three weeks, my right ear has presented me with hideously detuned ghost orchestras, squawking organ pipes, shrieking violins and cracked bells. Music encoded using codecs such as MP3 or AAC sounded like it was being played through loudspeakers whose cones had been torn apart, and any perception of stereo was lost: everything was shifted about 40° to the left, while demonic pitchless musicians wailed over my right shoulder. In short, all pleasure in music was replaced by agony, and my work as a performing musician, occasional record producer and film editor appeared finished.

This is an essay on the ailment diplacusis, and my journey to safety through it. To be more accurate, my particular case was diplacusis dysharmonica, where pitch is perceived normally in one ear, but wrongly in the other. This article is no substitute for a professional diagnosis and a course of therapy from a medical specialist, but it is published to show how a musician and amateur physicist (me) worked through the nightmare, and was healed by the brain and body’s own resources.

Yes, I’m better now and, indeed, most people recover without intervention. But, if you have begun a similar journey, please get checked by the best professional you can find because many different causes lead to the same ailment. Most triggers that the body can’t fix on its own can be cured by pharmaceutical or surgical intervention. Please don’t hesitate.

Where did it start?

I have normal hearing for a 51-year-old, gracefully growing older. There’s a little high-frequency tinnitus but nothing to worry about. Then, in May 2015 began my worst bout of influenza ever. This brought about the kind of coughing and congestion that kills older people.

While blowing my nose rather fiercely, I felt and heard something nasty, probably mucus, shoot up my right Eustachian tube and into my middle ear. Or perhaps too much pressure was used and something inside my middle ear became damaged?

Immediately, I felt a sense of pressure as if my ear needed to ‘pop’ and, as usual, there was a dullness of hearing. This is perfectly normal when the pressures either side of the tympanum are unequal. But also, there was a new acoustic effect, as if my eardrum were in direct physical contact with my throat. Breathing and swallowing became much louder than usual in this ear alone. And popping my ears to relieve pressure changed none of this.

So, within a very short space of time, I had an ear that felt completely full of something, and that would not respond to the normal procedures. The next day, I was checked by a doctor who wanted me to visit the audiology department at the hospital if things weren’t getting better. The tympanum is translucent, and an expert can diagnose much by shining bright light onto it.

What did I notice?

Day three dawned. Outside my house, off to the right from where I sit for my everyday work, there is a church. The bell, which was being tolled to call the congregation for the morning service, had developed a problem. It sounded as if it had been cracked, which was a pity because its sound was normally very pleasant, a reminder that this is a historic and pretty town. Later that day, there was space in the diary to visit the vicar to tell him about the sad accident that had happened in his bell tower, in case he’d not noticed.

Then it was time to edit and master some music for a client. Despite the feeling of pressure in the right ear, sensitivity had returned so I fearlessly began work.

The first piece of music wasn’t from the usual excellent producer whose work normally went into this particular project, and the difference certainly showed! The whole choir was way off to the left in the stereo soundstage, and the MP4 audio file sounded terribly distorted, as if encoded at a very low bitrate. The right-hand channel, particularly, had incredible harmonic distortion and countless intermodulation products. I very nearly fired off a cheery email to my friend who usually provides this material, saying “it’s easy to tell this isn’t from you!”

Then I glanced at the meters and the waveform. The audio was in dual-channel mono. In other words, both audio streams were identical and panned dead centre. What on EARTH was I hearing? Were my speakers or amplifier blown?

My headphones were plugged into a separately amplified output. The sound was just as awful. But then the real horror began: turning the cans the other way around, the balance and wild distortion inside my head were identical, as if I’d not reversed the headphones at all.

So I checked just the left channel: and it was perfect. But with the right channel alone, not only was the sound like someone singing through a comb and paper, it was nearly a semitone sharp! The vocal timbre also sounded sped up, like a tape being played through a pitch shifter.

A first response

This was deeply unpleasant. “I’m broken!” was the first thought. After a lifetime of playing and loving music, and wondering why my mother didn’t like musical sounds at all, suddenly all my own pleasure in music was lost. The glory of stereo, “sound sculpted in space”, had gone. I could no longer tell if an instrument or singer was in tune. And judgement on matters of tonal balance was impossible.

Every day in the press, we read about people whose lives have been utterly ruined by accidents. Losing part of one ear is hardly equivalent to being crippled and confined to a wheelchair for ever. And if a person suddenly disabled can find a way through, it wouldn’t be too much trouble for me with one-and-a-half ears and all my limbs still working.

A bit sad for a musician and producer, though — the end of my lifetime’s ambition.

That afternoon, I played piano for a rehearsal. The whole echo of the church appeared routed through a pitch-shifter and screamed mockingly at me like a choir in the worst kind of horror movie.

Analysis

So, that evening, there was time to analyse what was happening.

Speech? All sibilants on the left, and sounding sped-up in the right ear alone.

Sine waves? Fine up to about 2kHz, then bad intermodulation distortion when fed to both ears, and a pitch shift above 2kHz in the right ear alone.

Playing the piano? Everything an octave above Middle C and higher was surrounded by a vile cluster of discordant tones.

What about fun with heavily-panned Beatles’ songs, where the vocals or an instrument are fully on one stereo channel or the other? The trumpet solo in “Penny Lane” was unlistenable in part, though the brain did a good job of pulling some of it back into pitch on its lower notes. Over this, I had no conscious control: it was rather like watching a remotely controlled machine at work.

The Nat ‘King’ Cole album “Welcome To The Club” has the vocals bizarrely panned entirely on one channel. You can see where I’m going with this! And, yes, he was singing a semitone sharp. So was my enjoyment of music and my professional judgement over for life?

Over the week that followed, experiments continued. Every morning I’d be woken by the church clock chiming with all its harmonics in the wrong pitch (though the fundamental tone was fine), then I’d try the piano: there were clusters of evil upper partials on every note, and harmonies brought no pleasure or contrast. And recorded music encoded with perceptual codecs still sounded as if played through a class B amplifier with terrible crossover distortion.

Thinking in Physics

What might have been happening inside my ear? The feeling of pressure was still there, and everything above about 1.5kHz was pitch-shifted up.

If the workings of the ear are unknown to you, I suggest that, at this point, you take a look at some Wikipedia entries particularly regarding the tympanum, the ossicles, the cochlea and the organ of Corti. Remember how standing waves are set up along the basilar membrane, turning it into a spectrum analyser.

If you have access to a tone generator, try this: feed 2kHz or 3kHz into headphones, then clench your jaw strongly. Did you hear the pitch of the tone go up? Is the pressure on your ear affecting the bone holding your cochlea and therefore changing its shape, altering the places along the basilar membrane where different frequencies resonate, thereby fooling the brain into perceiving a different pitch?
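
If you have no signal generator to hand, FFplay can synthesise a test tone; a simple sketch using its built-in sine source (frequency in Hz, a few seconds long) is:

ffplay -f lavfi -i "sine=frequency=2000:duration=5"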

Maybe something, maybe mucus, was putting pressure constantly on my cochlea, possibly on its oval window, permanently changing the places where resonance occurs when frequencies are higher than about 1.5kHz? This is in line with the place theory of pitch perception.

And perhaps the audio that is normally heavily modified by the MP3 or AAC algorithms, disguised by the normal ear’s processes, is revealed in all its distortion by my suddenly revelatory but damaged cochlea? In other words, the spectral lines that these codecs decide to distort, lost in the ear’s usual perception, are shown in all their awfulness now that they are shifted for the benefit of my aural education.

How to fix my ear?

So at this point, about two weeks before writing this essay, I resolved to get through this in several ways.

  1. Using commonly available open source software, I could find where the frequency break in my damaged ear was, and design a process that maps frequencies above this point to slightly lower frequencies, thus restoring normal pitch perception for headphone use. Perhaps even a digital hearing-aid like this is possible?
  2. Middle ear infections cause pressure in the middle ear, so I was ready to do all that is possible to detect and clear any infection.
  3. I still had influenza and was very congested: so it would have been useful to keep using Olbas Oil and pseudoephedrine to clear any other sinus and Eustachian tube blockages.
  4. Retrain my brain regarding pitch. After all, as a baby, only after birth could the already-formed brain have been able to compare pitch sensations generated by the two ears and, somehow, correlate them — so why not try to restart the process?

The strong upper harmonics in violins and pipe organs howled violently in my right ear: and, if my family member who hated such instruments also had unresolved diplacusis, perhaps this was the reason for her dislike of such sounds?

Cured

Now, the good news, for me at least. My ear has become decongested in the last week, and the shrill demonic orchestra and choir have faded to almost nothing. My stereo hearing is now back to its normal clean state, and music is a constant pleasure. I didn’t need to make my own hearing-aid, the decongestants seemed to work, and my self-training with tones and careful music listening perhaps helped too.

Sometimes, diplacusis can be healed in this way by the body and brain’s own natural functions. This has taken about a month for me.

If you have just experienced the very disturbing onset of diplacusis, maybe this essay has given you hope? But please get to a hearing specialist as soon as you can, in case your situation is different from mine, and you need surgical intervention.

And never blow your nose too hard.

Automation—The Radio Authority Spoke Out

As increasing numbers of British independent radio stations use greater amounts of automation, or voice tracking, I was amused to read what the Radio Authority (Ofcom’s predecessor) decided about automation only fifteen years ago.

Today’s radio market is, of course, very different. The intensity of competition, especially from non-radio sources of entertainment, is a far greater challenge than it was at the opening of this century.

Who knows how the market might change in the face of the post-dotcom generation? Will leadership in forming taste become too fragmented, and the vacuum need filling? In music radio, could real presenters, experiencing minute-by-minute the music they are playing, be valued once more?

For another perspective, see the foot of this post.

♫ “You’ve yet to have your finest hour.” ♪

Click here to read the entire set of Radio Authority minutes.

Automation

Following on from the discussion at the September meeting, Members decided to set a limit on the amount of automated programming to be generally allowed in daytime on local radio stations. The general limits would be two hours a day on FM stations, and four hours a day on those AM stations which are obliged to broadcast twelve hours or more of locally produced and presented programming. Staff could negotiate different limits, on request, in accordance with specified criteria. The stations would each be written to, setting out the limits and giving an opportunity for representations to be made in respect of individual formats.

This decision on automation was not made on the basis that automated programming was necessarily undesirable. On the contrary, Members recognised that automation had a valuable part to play, particularly for overnight programming and for specialist shows.

However, they felt that the “localness” of stations, and also the “liveness” of the medium (a feature of radio highlighted by the Authority in its White Paper submission in July 2000), would be jeopardised if programming were allowed to be automated for more than limited periods during the day.

They also considered that listeners have a reasonable expectation for presentation to be live, and that too high a level of automation could undermine the trust that exists between the station and its audience. This in turn would affect both the quality of programme output and the reputation of the industry as a whole.

It was a fundamental part of the Authority’s statutory duties not to permit such a situation to occur. Consequently, the Authority decided to impose constraints on the amount of daytime automated programming without prior consent.

Meanwhile, Andrew Gray writes about the same topic HERE.