BU joins the InterNet...
A late report on the Sheffield -- RFI
A late report on the Sheffield -- RFI (2)
"Friendly" missiles and computer error -- more on the Exocet
Re: "Friendly" missiles and computer error - more on the Exocet
License Plate Risks
Laserprinter dangers
Report from AAAI-86
Wrongful eviction through computer error
More Aviation Hearsay?
The software that worked too well
 

----------------------------------------------------------------------

Date: Thu, 17 Apr 86 13:27:24 EST
From: bzs@bu-cs (Barry Shein)
To: risks@sri-csl
Subject: BU joins the InterNet...

I may as well tell this anecdote before others do...

Boston University this past week submitted their host table for inclusion in
the NIC table. Unfortunately, there were a few entries in the table that
should never have made it. The most interesting was a one-character nickname
("A") for host BU-CS (a local convenience).

Apparently a bug in the 4.2bsd htable program, which converts from the
standard NIC format to the format UNIX uses, caused it to fill your disk when
it hit this entry. I suspect from the notes that some hosts must pick up the
table automatically in the wee hours and do the conversion with a command
script, so people came in the next morning to a disk full of the string
"BUCSA". I was assured by one site that he no longer needs any mnemonics to
remember our name. I have no way of knowing the numbers, but apparently some
number of machines went down or were crippled.
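
The failure mode is a classic one: an error path for input that "can't
happen" which forgets to consume the input.  A minimal sketch in C (the
parsing logic and the address are invented, NOT the actual 4.2bsd htable
source; only the symptom -- the same line emitted until the disk fills --
matches the report):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *aliases = "A,BUCS";  /* "A": the one-character nickname */
        const char *p = aliases;
        int lines = 0;

        while (*p != '\0' && lines < 10) {  /* cap added so the demo halts */
            size_t len = strcspn(p, ",");   /* length of this alias */
            printf("127.0.0.1\tBUCS\t%.*s\n", (int)len, p);
            lines++;
            if (len < 2)
                continue;   /* BUG: rejects a "too short" alias without
                               advancing p, so the same line is written
                               forever, filling the disk */
            p += len;           /* normal case: step past the alias... */
            if (*p == ',')
                p++;            /* ...and its trailing comma */
        }
        return 0;
    }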

In addition, there was an entry for a machine of type "3B2"; htable broke on
that also, though not so dramatically, because the string started with a
digit. It seems that a night or so later htables were breaking again because
someone had managed to put a lower-case letter into the table.  (I have heard
this only second-hand.)

I then fixed our host table to avoid these troubles and ran it through htable
myself just to be sure, and it promptly deleted the first entry in my table.
Apparently the input had to have at least one blank line before the first
entry; again, no warning was given.

This is after almost three years of the program being in production at
probably thousands of sites. Don't trust any program over 30 (lines of code)?

 -Barry Shein, Boston University

----------------------------------------------------------------------

Date: 16-May-1986 1241
From: minow%pauper.DEC@decwrl.DEC.COM  (Martin Minow, DECtalk Engineering ML3-1/U47 223-9922)
To: risks@sri-csl.ARPA
Subject: A late report on the Sheffield -- RFI

  [PGN's SUMMARY LIST OF HORROR STORIES CONTAINS THIS ON THE SHEFFIELD:
  "Exocet missile not on expected-missile list, detected as friend" (SEN 8 3)
   [see Sheffield sinking, reported in New Scientist 97, p. 353, 2/10/83];
   Officially denied by British Minister of Defence Peter Blaker
   [New Scientist, vol 97, page 502, 24 Feb 83].  Rather, sinking abetted by
   defensive equipment being turned off to reduce communication interference?]

From the Boston Globe, May 16, 1986:

  Phone call jammed antimissile defenses

LONDON -- Electronic antimissile defenses on the British frigate Sheffield,
sunk in the 1982 Falklands conflict, were jammed during an Argentine attack
by a telephone call from the captain to naval headquarters, the Defense
Ministry said yesterday.  Twenty crewmen were killed when the Sheffield was
sunk May 4, 1982, by a French-made Exocet missile fired by an Argentine
plane.  A Defense Ministry spokesman, confirming a report in [the] London
Daily Mirror, said Commodore James Salt, the Sheffield's captain, was making
"an urgent operational call" to naval headquarters near London when the
missile hit.  "The electronic countermeasures equipment was affected by the
transmission.  Steps have been taken to avoid a repetition," the spokesman
said.  Commodore Salt now has a shore job as chief of staff to the fleet
commander-in-chief. (AP)

----------------------------------------------------------------------

Date: Fri, 16 May 86 17:13 PDT
From: <Dave-Platt%LADC@HI-MULTICS.ARPA>
To: Risks@SRI-CSL.ARPA
Subject: A late report on the Sheffield -- RFI

[beginning of message duplicated the above] From Today's LA TIMES: [...]

  The telephone system's transmitter was on the same frequency as the homing
  radar of the French-built Exocet missile fired at the Sheffield, and the
  transmission prevented the Sheffield's electronic countermeasures equipment
  from detecting the missile's radar and taking evasive action.

The article implies that this situation might have been avoided had the
Sheffield been equipped with an uplink into the British satellite
communication system; the article gives no details but I'd guess that such
an uplink would have used a transmitter which was (a) less powerful, (b)
more directional, or (c) on a completely different wavelength.

Does anyone have additional information about the equipment in question?
      [Dave Platt]

----------------------------------------------------------------------

Date: Thu, 25 Sep 1986  21:23 EDT
From: Rob MacLachlan <RAM@C.CS.CMU.EDU>
To:   risks@CSL.SRI.COM
Subject: "Friendly" missiles and computer error -- more on the Exocet

   [We have been around on this case in the past, with the "friendly" theory
    having been officially denied. This is the current item in my summary list:
       !!$ Sheffield sunk during Falklands war, 20 killed.  Call to London
           jammed antimissile defenses.  Exocet on same frequency.
           [AP 16 May 86](SEN 11 3)
    However, there is enough new material in this message to go at it once
    again!  But, please reread RISKS-2.53 before responding to this.  PGN]

    I recently read a book about electronic warfare which had some
things to say about the sinking of the Sheffield by an Exocet missile
during the Falklands war.  The sinking has been attributed to a
"computer error" on the part of a computer which "thought the missile
was friendly."  My conclusions are that:
 1] Although a system involving a computer didn't do what one might
    like it to do, I don't think that the failure can reasonably
    be called a "computer error".
 2] If the system had functioned in an ideal fashion, it would
    probably have had no effect on the outcome.

The chronology is roughly as follows:

The Sheffield was one of several ships on picket duty, preventing
anyone from sneaking up on the fleet.  It had all transmitters
(including radar) off because it was communicating with a satellite.

Two Argentine planes were detected by another ship's radar.  They
first appeared a few miles out because they had previously been flying
too low to be detected.  The planes briefly activated their radars,
then turned around and went home.

Two minutes later a lookout on the Sheffield saw the missile's flare
approaching.  Four seconds later, the missile hit.  The ship eventually
sank after salvage efforts were defeated by uncontrollable fires.

What actually happened is that the planes popped up so that they could
acquire targets on their radars, then launched Exocet missiles and
left.  (The Exocet is an example of a "Fire and Forget" weapon.  Moral
or not, they work.)  The British didn't recognize that they had been
attacked, since they believed that the Argentines didn't know how to
use their Exocet missiles.

It is irrelevant that the Sheffield had its radar off, since the
missile skims just above the water, making it virtually undetectable
by radar.  For most of the flight, it proceeds by internal guidance,
emitting no telltale radar signals.  About 20 seconds before the end
of the flight, it turns on a terminal homing radar which guides it
directly to the target.  The Sheffield was equipped with an ESM (Electronic
Support Measures) receiver, whose main purpose is to detect hostile radar
transmissions.

The ESM receiver can be preset to sound an alarm when any of a small
number of characteristic radar signals are received.  Evidently the
Exocet homing radar was not among these presets, since otherwise there
would have been a warning 20 seconds before impact.  In any case, the ESM
receiver didn't "think the missile was friendly"; it simply hadn't been
told it was hostile.  It should be noted that British ships which were
actually present in the Falklands were equipped with a shipboard
version of the Exocet.

If the failure was as deduced above, then the ESM receiver behaved
exactly as designed.  It is also hard to conceive of a design change
which would have changed the outcome.  The ESM receiver had no range
information, and thus was incapable of concluding "anything coming
toward me is hostile", even supposing the probably rather feeble
computer in the ESM receiver were cable of such intelligence.

In any case, it is basically irrelevant that the ESM receiver didn't
do what it might have done, since by 20 seconds before impact it was
too late.  The Sheffield had no "active kill" capability effective
against a missile.  Its anti-aircraft guns were incapable of shooting
down a tiny target skimming the water at near the speed of sound.

It is also possible to cause a missile to miss by jamming its radar,
but the Sheffield's jamming equipment was old and oriented toward
jamming Russian radars rather than smart Western radars, which
weren't even designed when the Sheffield was built.  The Exocet has a
large bag of tricks for defeating jammers, such as homing in on the
jamming signal.

In fact, the only effective defense against the Exocet which was
available was chaff: a rocket-dispersed cloud of metallized plastic
threads which confuses radars.  To be effective, chaff must be
dispersed as soon as possible, preferably before the attack starts.
After the Sheffield, the British were familiar with the Argentine
attack tactics, and could launch chaff as soon as they detected the
aircraft on their radars.  This defense was mostly effective.

Ultimately the only significant mistake was the belief that the
Argentines wouldn't use Exocet missiles.  Had this possibility been
seriously analysed, the original attack might have been
recognized.  The British were wrong, and ended up learning the hard
way.  Surprise conclusion: mistakes can be deadly; mistakes in war are
usually deadly.

I think that the most significant "risk" revealed by this event is the
tendency to attribute the failure of any system which includes a
computer (such as the British Navy) to "computer error".

----------------------------------------------------------------------

From: Robert Stroud <robert%cheviot.newcastle.ac.uk@Cs.Ucl.AC.UK>
Date: Tue, 30 Sep 86 14:43:24 gmt
To: risks@csl.sri.com
Subject: Re: "Friendly" missiles and computer error - more on the Exocet

There is a very interesting BBC TV documentary in the Horizon series called
"In the wake of HMS Sheffield" which is well worth seeing if you get the
chance. It discusses the failures in technology during the Falklands war
and the lessons which have been learnt from them, and includes interviews
with participants on both sides.

Naturally the fate of HMS Sheffield features prominently, and the chronology
given by Rob MacLachlan matches the program in most respects. However, I'm
afraid it says nothing about the Exocet homing signal being friendly - I was
specifically looking out for this. Instead, according to the documentary, the
device which should have detected the homing signal is situated next to the
satellite transmission device and was simply swamped by the signal from a
telephone call to London in progress at the time - this backs up Peter's
definitive account.

A couple of other points from the documentary are worth mentioning. Chaff
was indeed effective in helping one ship avoid an Exocet (I forget which one)
but it is by no means foolproof. The fuse needs to be set manually on deck
and must be exact, taking into account lots of factors like wind direction,
ship's course, distance from missile, etc. If you get it wrong, the distraction
comes too early or too late. There was a nice piece of computer graphics
showing the difference half a second could make - needless to say, they are
working on an automatic fuse!

The Argentinian planes were able to avoid radar detection using a technique
called "pecking the lobes". Basically they exploit the shape of the radar
cone and the curvature of the earth by flying level until they detect
a radar signal, then losing height and repeating the process. As Rob said,
they only need to rise up high enough to be detected at the last minute
when they fire the Exocets and turn for home - even this trace would only
be visible very briefly on the radar display and could easily be missed.
Thereafter the Exocets are silent until the last few seconds when they
lock onto the target to make last minute course corrections.

This problem has been dealt with by building radar devices that can be used
from helicopters several thousand feet up so they can see further over the
horizon.

There was also a discussion about whether it would be feasible to install
anti-missile weapons in cargo ships such as the Atlantic Conveyor (hit by
two Argentinian Exocets after being mistaken for one of the aircraft
carriers). Apparently, installing a weapon would be possible, but to be
effective it would need all the command & control computer systems as well to
keep track of everything else that was going on, and that would not be
feasible.

Robert Stroud, Computing Laboratory, University of Newcastle upon Tyne.

ARPA robert%cheviot.newcastle@cs.ucl.ac.uk (or ucl-cs.ARPA)
UUCP ...!ukc!cheviot!robert

----------------------------------------------------------------------

Date: Mon, 23 Jun 86 09:56:05 pdt
From: price@src.DEC.COM (Chuck Price)
To: RISKS@SRI-CSL.ARPA
Subject: License Plate Risks

I heard the following tale on KCBS this morning.  [I intersperse a few
details from the SF Chron, 23 Jun 86.  PGN]

It seems that this fellow [Robert Barbour] desired personalized license
plates for his car.  Since he loved sailing, he applied for ``SAILING'' and
``BOATING'' as his first two choices [seven years ago]. He couldn't think of
a third name with nautical intent, so he wrote in ``NO PLATE'' as his third
choice.

You guessed it. He got ``NO PLATE''.

A week or so later, he received his first parking ticket in the mail.  This
was followed by more and more tickets, from all over the state [2500 in
all!].  It seems that when a police officer writes a parking ticket for a
car with no license plates, he writes ``NO PLATE'' on the ticket.

Our friend took his problem to the DMV, which informed him that he should
change his plates.

The DMV also changed its procedures. It now instructs officers to write
the word ``NONE'' on tickets issued to cars without plates.

Wonder who's gonna get those tickets now?

-chuck price

     [Obviously some poor sap whose license plate says ``NONE''!]

----------------------------------------------------------------------

Date: Mon 31 Jul 86 17:38:10 N
To:  risks@sri-csl.arpa
From:    <MANSFIEL%DHDEMBL5.BITNET@WISCVM.ARPA>
Organisation:   European Molecular Biology Laboratory
Postal-address: Meyerhofstrasse 1, 6900 Heidelberg, W. Germany
Phone:          (6221)387-0 [switchboard]
Subject: Laserprinter dangers

Increasingly, large and "official" organisations such as motor vehicle tax
offices, insurance companies, etc., are using laser printers to print the
bills and other requests for money that are sent to customers. Whereas
previously pre-printed letterheads (often with several and/or coloured inks)
were used, now the laser printer is relied on to print the letterhead
itself, so that plain paper can be used.

It is probably only a matter of time before some clever person prints off a
batch of bills that look fine but carry the clever person's own account
number (or some other, slightly safer one), sends them out, and collects
lots of money.

There must be lots of other forgery and swindling possibilities with laser
printers.  Have any frauds of this type actually been committed?

   [Most banks no longer make blank deposit slips routinely available, after
    various episodes of people magnetically coding account numbers onto the
    blanks and leaving these slips in the stack of blanks.  Spoofing of
    letterheads is of course relatively easy with laser printers, but also
    with many of the electronic mailers around the net.  PGN]

----------------------------------------------------------------------

Date: Fri, 22 Aug 86 13:05:57 CDT
Received: by banzai-inst (1.1/STP) id AA03138; Fri, 22 Aug 86 13:05:57 CDT
To: risks@csl.sri.com
Subject: Report from AAAI-86    [Really from Alan Wexelblat]

I just got back from a week at AAAI-86.  One thing that might interest
RISKS readers was the booth run by Computer Professionals for Social
Responsibility (CPSR).  They were engaged in a valiant  (but ineffectual)
effort to get the AI mad-scientist types to realize what some of their
systems are going to be doing (guiding tanks, cruise missiles, etc.).

They were handing out some interesting stuff, including stickers that said
(superimposed over a mushroom cloud):  "It's 11 p.m.  Do you know what your
expert system just inferred?"

They also had a series of question-answer cards titled "It's Not Trivial."
Some of them deal with things that have come up in RISKS before.  [I left
them in for the sake of our newer readers.  PGN]    They are:

Q1:  How often do attempts to remove program errors in fact introduce one
 or more additional errors?

A1:  The probability of such an occurrence varies, but estimates range from
 15 to 50 percent (E.N. Adams, "Optimizing Preventive Service of
 Software Products," _IBM Journal of Research and Development_,
 Volume 28(1), January 1984, page 8)

Q2:  True or False:  Experience with large control programs (100,000 < x <
 2,000,000 lines) suggests that the chance of introducing a severe
 error during the correction of original errors is large enough that
 only a small fraction of the original errors should be corrected.

A2:  True. (Adams, page 12)

Q3:  What percentage of federal support for academic Computer Science
 research is funded through the Department of Defense?

A3:  About 60% in 1984.  (Clark Thompson, "Federal Support of Academic
 Research in Computer Science," Computer Science Division, University
 of California, Berkeley, 1984)

Q4:  What fraction of the U.S. science budget is devoted to defense-related
 R&D in the Reagan 1985/86 budget?

A4:  72%  ("Science and the Citizen,"  _Scientific American_ 252:6 (June
 1985), page 64)

Q5:  The Space Shuttle Ground Processing System, with over 1/2 million lines
 of code, is one of the largest real-time systems ever developed.
 The stable release version underwent 2177 hours of simulation
 testing and then 280 hours of actual use during the third shuttle
 mission.  How many critical, major, and minor errors were found
 during testing?  During the mission?

A5:             Critical  Major  Minor
     Testing        3       76    128
     Mission        1        3     20
 (Misra, "Software Reliability Analysis," _IBM Systems Journal_,
 Volume 22(3), 1983)

Q6:  How large would "Star Wars" software be?

A6:  6 to 10 million lines of code, or 12 to 20 times the size of the Space
 Shuttle Ground Processing System.  (Fletcher Report, Part 5, page 45)

The World Wide Military Command and Control System (WWMCCS) is used by
civilian and military authorities to communicate with U.S. military forces
in the field.

Q7:  In November 1978, a power failure interrupted communications between
 WWMCCS computers in Washington, D.C. and Florida.  When power was
 restored, the Washington computer was unable to reconnect to the
 Florida computer.  Why?

A7:  No one had anticipated a need for the same computer (i.e., the one in
 Washington) to sign on twice.  Human operators had to find a way to
 bypass normal operating procedures before being able to restore
 communications.  (William Broad, "Computers and the U.S. Military
 Don't Mix," _Science_ Volume 207, 14 March 1980, page 1183)

Q8:  During a 1977 exercise in which WWMCCS was connected to the command and
 control systems of several regional American commands, what was the
 average success rate in message transmission?

A8:  38%  (Broad, page 1184)

Q9:  How much will the average American household spend in taxes on the
 military alone in the coming year?

A9:  $3,400 (Guide to the Military Budget, SANE)

[question 10 is unrelated to RISKS]

Q11: True or False?  Computer programs prepared independently from the same
 specification will fail independently.

A11: False.  In one experiment, 27 independently-prepared versions, each
 with reliability of more than 99%, were subjected to one million
 test cases.  There were over 500 instances of two versions failing
 on the same test case.  There were two test cases in which 8 of the
 27 versions failed.  (Knight, Leveson, and St. Jean, "A Large-Scale
 Experiment in N-Version Programming," 15th International Symposium
 on Fault-Tolerant Computing, FTCS-15, 1985)
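
The card's numbers invite a back-of-envelope check.  A sketch in C, with an
assumed per-version failure rate (the card guarantees only "more than 99%"
reliability), of how many coincident two-version failures independence would
predict:

    #include <stdio.h>

    int main(void)
    {
        double p     = 1e-4;  /* assumed: each version fails 1 case in 10,000 */
        double cases = 1e6;   /* one million test cases */
        double pairs = 27.0 * 26.0 / 2.0;   /* 351 pairs of versions */

        /* Under independence, a given pair fails together on a given
           test case with probability p*p. */
        printf("expected coincident failures: %.1f\n", pairs * cases * p * p);
        /* prints ~3.5 -- versus the 500+ coincidences actually observed */
        return 0;
    }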

Q12: How, in a quintuply-redundant computer system, did a software error
 cause the first Space Shuttle mission to be delayed 24 hours only
 minutes before launch?

A12: The error affected the synchronization initialization among the 5
 computers.  It was a 1-in-67 probability involving a queue that
 wasn't empty when it should have been and the modeling of past
 and future time.  (J.R. Garman, "The Bug Heard 'Round the World,"
 _Software Engineering Notes_ Volume 6 #5, October 1981, pages 3-10)

Q13: How did a programming punctuation error lead to the loss of a Mariner
 probe to Venus?

A13: In a FORTRAN program, DO 3 I = 1,3 was mistyped as DO 3 I = 1.3 which
 was accepted by the compiler as assigning 1.3 to the variable DO3I.
 (_Annals of the History of Computing_, 1984, 6(1), page 6)

Q14: Why did the splashdown of the Gemini V orbiter miss its landing point
 by 100 miles?

A14: Because its guidance program ignored the motion of the earth around
 the sun. (Joseph Fox, _Software and its Development_, Prentice Hall,
 1982, pages 187-188)

[Questions 15-17 are not RISKS related]

Q18: True or False?  The rising of the moon was once interpreted by the
 Ballistic Missile Early Warning System as a missile attack on the US.

A18: True, in 1960.  (J.C. Licklider, "Underestimates and Overexpectations,"
 in _ABM: An Evaluation of the Decision to Deploy an Anti-Ballistic
 Missile_, Abram Chayes and Jerome Wiesner (eds), Harper and Row,
 1969, pages 122-123)

[question 19 is about the 1980 Arpanet collapse, which RISKS has discussed]

Q20: How did the Vancouver Stock Exchange index gain 574.081 points while
 the stock prices were unchanged?

A20: The stock index was calculated to four decimal places, but truncated
 (not rounded) to three.  It was recomputed with each trade, some
 3000 each day.  The result was a loss of an index point a day, or
 20 points a month.  On Friday, November 25, 1983, the index stood
 at 524.811.  After incorporating three weeks of work for consultants
 from Toronto and California computing the proper corrections for 22
 months of compounded error, the index began Monday morning at
 1098.892, up 574.081.  (Toronto Star, 29 November 1983)
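
The arithmetic is easy to reproduce.  A toy model in C (the trade count per
day is from the story; the price moves are invented; the essential step is
truncating to three decimals and feeding the truncated value back in):

    #include <stdio.h>
    #include <stdlib.h>

    /* Truncate (not round) to three decimal places, as the exchange did. */
    static double trunc3(double x)
    {
        return (double)((long)(x * 1000.0)) / 1000.0;
    }

    int main(void)
    {
        double index = 1000.000;   /* hypothetical starting value */

        /* One flat trading day: ~3000 recomputations, each moving the
           "true" index by a tiny amount that averages to zero. */
        for (int trade = 0; trade < 3000; trade++) {
            double move = (rand() % 19 - 9) / 10000.0;  /* -0.0009 .. +0.0009 */
            index = trunc3(index + move);   /* truncation discards value */
        }
        printf("index after one unchanged day: %.3f\n", index);
        /* prints roughly 998.6 -- about the "loss of an index point a
           day" described above */
        return 0;
    }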

Q21: How did a programming error cause the calculated ability of five
 nuclear reactors to withstand earthquakes to be overestimated, and
 the plants to be shut down temporarily?

A21: A program used in their design used an arithmetic sum of variables when
 it should have used the sum of their absolute values.  (Evars Witt,
 "The Little Computer and the Big Problem,"  AP Newswire, 16 March
 1979.  See also Peter Neumann, "An Editorial on Software Correctness
 and the Social Process,"  _Software Engineering Notes_, Volume 4(2),
 April 1979, page 3)
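
The distinction is easy to see with made-up numbers.  A sketch in C (the
stress contributions are invented; only the signed-sum-versus-absolute-sum
mistake is from the report):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        /* Hypothetical stress contributions; signs reflect direction. */
        double contrib[] = { 4.2, -3.9, 1.1 };
        double signed_sum = 0.0, worst_case = 0.0;

        for (int i = 0; i < 3; i++) {
            signed_sum += contrib[i];        /* lets contributions cancel */
            worst_case += fabs(contrib[i]);  /* conservative combination  */
        }
        printf("signed sum %.1f vs sum of absolute values %.1f\n",
               signed_sum, worst_case);
        /* 1.4 vs 9.2: the signed sum understates the combined stress, so
           the plants' earthquake tolerance looked better than it was */
        return 0;
    }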

Q22: The U.S. spy ship Liberty was attacked by Israeli forces on June 8,
 1967.  Why was it there in spite of repeated orders from the U.S.
 Navy to withdraw?

A22: In what a Congressional committee later called "one of the most
 incredible failures of communications in the history of the
 Department of Defense," none of the three warnings sent by three
 different communications media ever reached the Liberty.  (James
 Bamford, _The Puzzle Palace_, Penguin Books, 1983, page 283)

Q23: AEGIS is a battle management system designed to track hundreds of
 airborne objects in a 300 km radius and allocate weapons sufficient
 to destroy about 20 targets within the range of its defensive
 missiles.  In its first operational test in April 1983, it was
 presented with a threat much smaller than its design limit:  there
 were never more than three targets presented simultaneously.  What
 were the results?

A23: AEGIS failed to shoot down six out of seventeen targets due to system
 failures later associated with faulty software.  (Admiral James
 Watkins, Chief of Naval Operations and Vice Admiral Robert Walters,
 Deputy Chief of Naval Operations.  Department of Defense
 Authorization for Appropriations for FY 1985.  Hearings before the
 Senate Committee on Armed Services, pages 4337 and 4379.)

Well, this message is long enough; I'll hold off on my personal commentaries.
People wanting more information can either check the sources given or
contact CPSR at P.O. Box 717, Palo Alto, CA  94301.

--Alan Wexelblat
ARPA: WEX@MCC.ARPA or WEX@MCC.COM
UUCP: {ihnp4, seismo, harvard, gatech, pyramid}!ut-sally!im4u!milano!wex

----------------------------------------------------------------------

Date: Thu, 2 Oct 86 19:10:10 CDT
From: Bill Janssen <janssen@mcc.com>
Subject: Wrongful eviction through computer error
To: risks@sri-csl.arpa

An interesting thing happened to me last month.  I got home on the 5th of
September to find an eviction notice on my living room floor.  Something
about not paying my rent.  Well, I gathered up the checks and went over to
the office.  Turns out the problem was that I had already paid for October,
as well as September, and the apartment management folks had just switched
to a new computer system! There must have been a line in it something like

 if (last_month_paid_for != this_month
     && day == trigger_day_for_eviction)
         issue_eviction_notice();
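
Presumably the fix, in the same hypothetical notation, is to treat a tenant
who has paid ahead as paid up, comparing the months with an ordering rather
than with equality:

 if (last_month_paid_for < this_month      /* paid-ahead tenants pass */
     && day == trigger_day_for_eviction)
         issue_eviction_notice();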

According to some of the office staff, 11 other people had already
been in with similar complaints.

Bill Janssen, MCC Software Technology, 9430 Research Blvd, Austin, Texas  78759
 UUCP:  {ihnp4,seismo,harvard,gatech,pyramid}!ut-sally!im4u!milano!janssen

----------------------------------------------------------------------

From: mnetor!spectrix!clewis@seismo.CSS.GOV
To: Neumann@csl.sri.com
Subject: [More Aviation Hearsay?]
Date: Wed Oct  8 12:04:57 1986
ReSent-To: RISKS@CSL.SRI.COM

I understand and appreciate your comments in mod.risks about nth-party/
hearsay stuff.  But, judging from the examples you gave, in case you are
really looking for some aviation accidents partially due to obedience to the
"book", here are two -- both commercial accidents at Toronto International
(now Pearson International).  Both from MOT (then DOT) accident
investigations:

About 15 years ago an Air Canada DC-8 was coming in for a landing.  At
60 feet above the runway, the pilot asked the co-pilot to "arm" the spoilers.
The co-pilot goofed and fired them.  The aircraft dropped abruptly onto
the runway, pulling about 4 G's on impact.  At that point one of the
engine/pylon assemblies tore away from the wing -- this was an aircraft
defect, because the engines were supposed to withstand this impact; a
6 G impact is supposed to shear the mounting pins.  Not aware of this
fact, the pilot performed what the book told him to do: go around for
another try.  He only made it halfway around -- the pylon had torn away
a portion of the fuel tank, and the aircraft caught fire and crashed in
a farmer's field, killing all aboard.

In retrospect, the pilot should have stayed on the ground, contrary
to the book.  Many would have survived the fire on the ground.  However,
it was difficult to see how the flight crew could have realized that
the aircraft was damaged as it was in the short time that they had to
decide.  The spoiler-arming system was altered to make a recurrence less likely.

The second incident was about 8 years ago, on an Air Canada DC-9 taking
off.  During takeoff one of the tires blew, throwing rubber fragments
through one of the engines.  One of these fragments damaged a temperature
sensor in the engine, causing an "engine fire" indication to come on in
the cockpit.  The pilot did what the book said, "abort takeoff", even
though he was beyond the safe stopping point.  The aircraft slid off the
end of the runway and into the now-infamous 50-foot-deep gully between
the runway and the 401 highway.  The fuselage broke in two places, causing
one death and several broken bones and minor back injuries.

In retrospect, if the pilot had not aborted takeoff, he would have been
able to take off successfully and come around for a reasonably safe landing,
saving the aircraft and preventing any injuries.  However, there was
absolutely no way that they could have determined that the engine was not
on fire.

Results:
    - In spite of the findings, I seem to remember that the pilot was
      suspended for some time.
    - Recommendations:
        - Filling in the gully -- not done.
        - Cutting grooves in the runways for improved braking -- not done
          yet, but the media are still pushing the MOT.  (I'm neutral on
          this one; the MOT has some good reasons for not doing it.)
        - Cleaning the tarmac of burned rubber -- only done once, if I
          recall correctly.

As a counterexample, I offer another:

It had become common practice for Twin Otter pilots to place the props
in full reverse pitch while landing, instants before actually touching down.
This had the effect of shortening the landing run considerably over the
already short runs (the Twin Otter is a STOL aircraft).  However, after a
number of accidents were traced to pilots doing this too soon (e.g., 50 feet
up), the manufacturer modified the aircraft so as to prevent
reverse pitch unless the aircraft is actually on the ground.

(The above, however, is from a newspaper, and would bear closer research.)

----------------------------------------------------------------------

Date: Wed, 29 Oct 86 17:31:59 pst
From: Dave Benson <benson%wsu.csnet@CSNET-RELAY.ARPA>
To: risks%csl.sri.com@RELAY.CS.NET
Subject: The software that worked too well

This story is nth hand, thus to be classified as rumor.  But it is
relevant to RISKS, so I pass it on, if only as a parable.

SeaTac is the main Seattle-area airport.  Ordinarily aircraft landings are
from the north, and this end of the runway is equipped with all the sensing
equipment necessary to do ALS (Automatic Landing System) approaches.

The early 747 ALS worked beautifully, and the first of these multi-hundred-ton
aircraft set down exactly at the spot in the center of the runway that the
ALS was heading for.  The second 747 set down there.  The third 747 landed
on this part of the runway. ... As did all the others.

After a while, SeaTac personnel noticed that the concrete at this point at
the north end of the ALS runway was breaking up under the repeated impact of
747 landings.  So the software was modified so that, 3 miles out on the
approach, a random number generator is consulted to choose a landing spot --
a little long, a little short, a little to the left, or a little to the right.
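
As described, the fix amounts to a one-time random perturbation of the aim
point.  A sketch in C (the offset ranges are invented for illustration):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        srand((unsigned)time(NULL));

        /* Three miles out, pick a random offset from the nominal
           touchdown point so the wear is spread over the zone. */
        int along_ft  = rand() % 401 - 200;  /* -200 .. +200 ft long/short */
        int across_ft = rand() % 41 - 20;    /* -20 .. +20 ft left/right */

        printf("aim point offset: %+d ft along track, %+d ft across\n",
               along_ft, across_ft);
        return 0;
    }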

   THE MORAL:
   Don't assume you understand the universe without actually experimenting.

----------------------------------------------------------------------