                            HYBALL INFORMATION

   UPDATE.TXT; October, 2000.  Information or program changes not yet
      incorporated into the latest printed edition of README.TXT.  Much
      of this has now been added to the ASCII version of README.TXT, however.

 _____________________________________________________________________________
                             September, 1993

   A program is available, HYPICK by name, for selecting patterns from any one
      HYBUFx.y archive and transferring them in any chosen order to a reduced HYBUF
      archive whose extension, pending renaming, is .PIK.  This will be included in
      the distribution package if space ever permits.  (Aug, 1994: HYPICK is now
      included in PACK5 on distribution disk 2.)

   Basenames of the HYBALL-output files that report rotation results have a new format
      replacing the old form SEEdij.H<n> wherein d is the first letter of its datafile
      precursor and ij are numerical indices inherited from the extraction pattern it
      rotates.   The new form is $<dat>ij*.H<n> wherein <dat> is a longer beginning segment
      of the rawdata filename and * is either null or a letter, depending on whether the
      input pattern whose rotation this reports carried location constraints on axes
      previously shifted by HYBLOCK.  In the latter case, the unformatted HYBLOCK-output
      file transporting the shifted intermediate pattern to HYBALL is named <dat>ij*.B<n>
      and is accompanied by a SEE-file named $<dat>ij.H<n>.  That letter is a sequential
      index for different HYBLOCK shiftings of the same MODA-extraction pattern.

   Until very recently, HYBALL rotation has iterated planar factor shifts in a fashion
      I call "Parallel" in contrast to the "Serial" iteration more traditional in rotation
      algorithms.  (See my forthcoming "How well do criterion-optimizing routines for
      factor rotation in fact optimize?")  I have now added Serial iteration as an
      additional option in HYBALL's rotation controls, since an extensive simulation study
      has shown this to be almost as good at source recovery as Parallel at nearly twice
      the speed.  Also, the style in which control options are presented has been made
      considerably more user-friendly, including an attempt at on-line documentation.
 _______________________________________________________________________________________
                              January, 1994

   The Hyball package's front end has been massively upgraded.  The new features are
   detailed in the January 1994 README.DOC, whose ASCII version is included in PACK1.
   But here is a summary of the most salient of these.

   1.  Formerly separate program COVCOMP for computing covariances has been incorporated
       into HYDATA.  And HYDATA can now read almost any ASCII rawdata file as received,
       in particular ones exported by commercial data-processing programs.  (However,
       HYDATA still writes at user option a HYDATA-standard transcription of the source
       data for use by the HYDATA-supplement programs.)  Also, HYDATA can read names of
       the variables from the source datafile or from a namefile prepared in advance.

   2.  The names of variables can now be up to eight alphanumeric characters in length.

   3.  A new program, FIXDATA, has been added to the HYDATA-supplement utilities.  Given
       the covariances computed from HYDATA-standard transcription of a rawdata file
       with missing scores, FIXDATA replaces most if not all of its bad entries with
       estimates of their proper values.  This program is still in its early stages of
       appraisal; feedback on your success with it would be much appreciated.

   4.  Program RESCORE for computing score files on assorted functions of the
       variables in a HYDATA-standard datafile will now also output missing-data
       shadows of the input variables, where the missing-data "shadow" of a variable
       X is the binary variable whose value is 0 or 1 according to whether the subject's
       score on X is missing.  These shadows can be included in the information from
       which FIXDATA estimates missing scores.
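        To fix the idea, here is a minimal Python sketch of shadow construction.
        It assumes NaN stands in for a missing score and that 1 marks "missing";
        RESCORE's own file format and 0/1 convention are not reproduced here.

```python
import numpy as np

def shadow(x):
    # Missing-data "shadow" of variable x: 1 where the score is missing
    # (NaN in this sketch), 0 where it is present.  Which of 0/1 marks a
    # missing score is an assumption; RESCORE's own convention may differ.
    x = np.asarray(x, dtype=float)
    return np.isnan(x).astype(int)

shadows = shadow([3.0, float("nan"), 5.0, float("nan")])
```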
 _____________________________________________________________________________
                                June, 1994

   Provision has been made for conducting bootstrap studies of the sampling
      noise in Hyball factor solutions.  These procedures (two versions) are
      documented in ASCII file READBOOT.DOC, which the distribution package
      contains in PACK5 along with programs BOOTSUMM and HYBOOT.  Floppy space
      limitations have required that PACK5 be put on a separate disk.
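      The resampling idea behind these bootstrap procedures can be sketched in
      Python.  This is only the generic bootstrap-covariance scheme, not the
      HYBOOT/BOOTSUMM code itself.

```python
import numpy as np

def bootstrap_covs(data, n_boot, seed=0):
    # Covariance matrices computed from n_boot bootstrap resamples of the
    # subject records (rows drawn with replacement).
    data = np.asarray(data, float)
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    return [np.cov(data[rng.integers(0, n, size=n)], rowvar=False)
            for _ in range(n_boot)]

covs = bootstrap_covs(np.arange(30.0).reshape(10, 3), 5)
```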

 _____________________________________________________________________________
                              January, 1997

   HYDATA is now capable of detecting pairwise nonlinear relations among the
      data variables.  When HYDATA computes covariances from an input datafile,
      the run finishes with the option either to compute bootstrap covariances
      (see June, 1994 update) or to test for nonlinearities. (Only one of these
      options is allowed on the same run.)  When this option is accepted, the
      program computes the quadratic regression of each data variable Xi upon
      each other, Xj, determines how much the squared multiple correlation
      R(i,jj) of this quad-regression exceeds the squared correlation r(i,j)
      of Xi's linear regression on Xj, and reports the distribution of both
      QR(i,j) = R(i,jj)-r(i,j) and QS(i,j) = QR(i,j)/R(i,jj).  Finally,
      the program prints out i,j,QR(i,j) for the NL largest QR-values (your
      choice of NL), followed by the same for i,j,QS(i,j).  What you do with
      this information is at present entirely up to you, though if any strong
      nonlinearities are detected you will surely want to look more closely at
      the joint distributions of those particular items and inquire into the
      possible interpretive significance of their nonlinearities.
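      For readers who want to reproduce the test outside HYDATA, here is a rough
      Python sketch of the QR/QS computation for one variable pair, under my
      reading that r(i,j) denotes the squared linear correlation:

```python
import numpy as np

def quad_excess(xi, xj):
    # QR(i,j) = R(i,jj) - r(i,j): the excess of the squared multiple
    # correlation of Xi's quadratic regression on Xj over the squared
    # linear correlation, plus QS(i,j) = QR(i,j)/R(i,jj).
    xi, xj = np.asarray(xi, float), np.asarray(xj, float)
    r2 = np.corrcoef(xi, xj)[0, 1] ** 2          # r(i,j), squared
    X = np.column_stack([np.ones_like(xj), xj, xj ** 2])
    beta, *_ = np.linalg.lstsq(X, xi, rcond=None)
    resid = xi - X @ beta
    R2 = 1.0 - resid @ resid / ((xi - xi.mean()) @ (xi - xi.mean()))
    return R2 - r2, (R2 - r2) / R2

# A purely quadratic relation: linear r is ~0, so QR and QS approach 1.
xj = np.linspace(-1.0, 1.0, 101)
QR, QS = quad_excess(xj ** 2, xj)
```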

   MODA now provides an option to minimize the number of negative correlations
      in the array factored.  The program reports how much reduction in
      negatives can be accomplished by a reported number of reflections and,
      if this all-or-none option is accepted, includes identification of which
      variables are reflected in both its own SEE-report and the extraction
      patterns (M-files) passed as input to HYBLOCK and HYBALL.  (Note:
      Reflection of X-set variables is not currently available.)

          The information content of MODA's SEE-report of results has also been
      improved. (1) Each solution's listing of uniquenesses in the order of
      item (variable) indices is now followed by a listing of item indices in
      order of decreasing uniquenesses.  If items need to be discarded, this
      makes it easy to spot which ones have least in common with the others.
      (2) Print of the items' residual covariances (their parts unaccounted
      for by the extracted factors), which is unmanageably huge for large data
      arrays, has been replaced by a list of the item pairs having the largest
      residual covariances.   How many of these are reported is a graded option
      exercised interactively in light of on-screen information about the
      distribution of these residuals.

   HYBALL now shows, at bottom of each displayed factor pattern, the root-mean-
      square (RMS) of loadings in each pattern column.  And it is simple (by
      two keystrokes at the Main Menu) to permute a pattern's columns into
      order of decreasing RMS loadings.  (When the pattern is under HYBLOCK
      rotation constraints, any stipulated permutation is carried out only
      within factor blocks; that is, block order remains unchanged.)
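      The column-RMS computation and reordering amount to the following Python
      sketch of the unconstrained, whole-pattern case:

```python
import numpy as np

def order_by_rms(pattern):
    # Root-mean-square loading of each column, and the pattern with its
    # columns permuted into decreasing-RMS order.  (This is the whole-
    # pattern case; under HYBLOCK constraints HYBALL permutes only
    # within factor blocks.)
    P = np.asarray(pattern, float)
    rms = np.sqrt(np.mean(P ** 2, axis=0))
    return rms, P[:, np.argsort(-rms)]
```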

          Also, HYBALL has been given considerable flexibility in how it
      selectively treats Waif factors received from HYBLOCK.  When you learn
      enough about HYBLOCK to understand what Waifs are, you will find
      HYBALL's Waif options adequately explained on screen.  Prior to that,
      however, HYBLOCK allows Waifs to be studied and absorbed where
      appropriate into the nonWaif factor blocks with sufficient finality
      that you will generally want to accept HYBALL's default option of
      deleting all Waifs from the factors received from HYBLOCK.

   HYLOG now commences its SEE-output of pattern appraisals with a frequency
      distribution of the data variables' communalities.  (These are the same
      for all patterns examined on that HYLOG run.)

   TWOLOGS can now compare factor solutions derived from different COV-files
      so long as the latter originate from the same rawdata file, and allows
      simultaneous display of two patterns chosen for detailed comparison in
      a juxtaposed fashion that is exceptionally perspicuous even for very
      large patterns.

 _____________________________________________________________________________
                              August, 1997

          The Hyball system's problem-size capacity has now been stretched
      to a maximum of 170 factored variables (NV) and 30 common factors (NF).
      HYDATA can read rawdata files with many more input variables than 170
      and compute simultaneous covariances for any selection of those up to
      NV = 180 (which can be pushed to 190 if wanted).  And MODA can read and
      allow selections from any covariance file that HYDATA can compute.  But
      MODA can factor covariances for at most 170 dependent (Y-set) variables
      albeit its number of fixed-input (X-set) variables is not tightly
      constrained, and the extraction patterns it writes for input to HYBLOCK
       or HYBALL are limited to NV (Y-set + X-set) <= 170 and NF <= 30.
        >>> These problem-size limits no longer apply -- See June 1999 <<<

   HYBLOCK now allows study of rotated loadings on provisional Waif factors
      before finalizing your choice of block factors.  This facilitates your
      judgment not merely whether too much common variance has been Waifed
      but also which specific blocks seem most underfactored.

   HYBALL's repertoire of rotation methods now includes an OBLIMIN option,
      with its Gamma parameter selected by special adaptation of Hyball
      control CV as explained in its on-screen documentation.  It turns out,
      as theory predicts, that OBLIMIN may well perform even better than
      HYBALL's native rotation alternatives when the source pattern's factor
      complexity approaches ideal independent-clusters simplicity.

 _____________________________________________________________________________
                            December, 1997

   MODA now defaults to special treatment for variables it diagnoses to be
      binaries (that is, dichotomies scored 0/1).  The cues for this are set
      in HYDATA, wherein variables with binary scoring remain so-scaled when
      their covariances with other variables are computed, unlike the non-
      binaries which are renormed to unit variance when their covariances are
      computed.  (HYDATA also allows dichotomies that aren't scored 0/1 in
      the raw data to be declared binary.)  The variances that MODA receives
      from HYDATA are thus 1.0 for non-binary variables but less than that
       (specifically, equalling M*(1-M), where M is the item's mean) when a variable is
      binary.  MODA presumes that binary variables are intended for inclusion
      in the factoring's X-set (manifest inputs) and defaults to that treatment
      unless you stipulate otherwise.  >>> See June 1999 for update on binaries.
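      The M*(1-M) identity is easily checked; the sketch below uses the
      population variance of a 0/1 variable:

```python
import numpy as np

x = np.array([1, 0, 0, 1, 0, 0, 0, 1])   # a 0/1 ("binary") variable
M = x.mean()
# For 0/1 scoring, E[x^2] = E[x] = M, so the population variance is
# E[x^2] - M^2 = M*(1-M):
assert np.isclose(x.var(), M * (1 - M))
```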

          The reason for this treatment of binaries is (a) linear dependency
     models cannot accurately describe the behavior of dichotomous variables
     treated as output (albeit this can still yield informative approximations)
     but incur no nonlinearity strain at all when they are input; and
     (b) binary scaling for input dichotomies is a natural metric yielding
     regression coefficients more readily interpreted than distribution-
     relative scaling affords.  It is particularly desirable for sets of
     dummy dichotomies derived from polytomous variables.

   MODA also now enables choice of extraction dimensionality to consider the
     strength and frequency of salient item coefficients on provisionally
     extracted factors near garbage cutoff.  This encourages initial over-
     factoring by whatever method you prefer, followed by pruning back to
     dimensions of initial-extraction space on which item saliences hold
     forth some promise of meaningful interpretation.  (As a tentative rule
     of thumb, accept a factor at pruning time only if at least two items are
      salient on it at level .25 or, better, .30.)  If this feature proves to be
     as useful as my early experience with it suggests, it will be added to
     HYBLOCK for assisting choice of factor-block dimensionalities.
     [Note: Done.]
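      The rule of thumb can be expressed as a small Python sketch.  The cutoff
      and two-item threshold are the text's suggestion; MODA's own salience
      bookkeeping is richer than this.

```python
import numpy as np

def keep_factors(pattern, cutoff=0.25, min_salients=2):
    # Keep a provisionally extracted factor only if at least
    # `min_salients` items load on it at |loading| >= cutoff.
    P = np.abs(np.asarray(pattern, float))
    counts = (P >= cutoff).sum(axis=0)
    return [j for j in range(P.shape[1]) if counts[j] >= min_salients]
```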

 _____________________________________________________________________________
                             June, 1999

   All Hyball programs are now written for Lahey LF90 compilation, under
     which problem size is virtually unlimited.  And the programs will run
      either in stand-alone DOS or a DOS window, though you may encounter
      file-access problems if you run under an early version of Windows (early
     Win95 or before) and try to work across a large hard-drive partition.

   If you enter MODA with a covariance matrix not generated by HYDATA,
     the covariances can now be scaled however you please with or without
     decimals.  MODA presumes that these are mainly rescaled correlations
     and re-rescales by dividing all by the largest variance.  But it also
      allows "off-norm" variances to persist, that is, ones less than the
      largest; and if any are found, MODA interprets these in either
     of two ways:

     1) The input file may list the indices of items identified there as
        binary.  MODA takes the received variances/covariances of items
        so flagged to be their binary-scale variances/covariances and,
        although standardizing them to unit variance for factor extraction,
        includes their binary SDs in MODA's output passed to HYBLOCK and
        HYBALL.  Whereas HYBLOCK passes on any binary SDs without acting on
         them, HYBALL allows factors aligned with X-set items identified as
         originally binary to be rescaled in binary (0,1) metric when its
         rotated factor patterns are exhibited.  (Since the corresponding
        X-items retain standardized scaling, their loadings on their
        binary-scaled factor counterparts become startlingly large.)

     2) Any off-norm variance not flagged as binary is assumed to be a
        reliability coefficient externally estimated for that item.
        This is ignored for items factored in the Y-set, but allows
        correlations with X-set items viewed as imperfectly reliable
        measures of exogenous input to be "corrected for attenuation".
         At present, the only way to substitute reliability coefficients
        for observed variances in MODA input is to edit the input file
        by hand.  (That can change if any interest arises.)  And where
        you get this reliability information is strictly up to you.

 _____________________________________________________________________________
                           October, 1999

   Traditional algorithms for factor rotation fail at locating axis positions
     whereon simple structure is manifested mainly by complexity-2 items, that
     is, items having salient loadings on just two factors.  And apart from
     mainly-complexity-2 patterns whose axes also have complexity-1 markers,
     this has previously been true of HYBALL rotation as well.  However, a
     method of adaptive item weighting that promotes recovery of complexity-2
     patterning has now been installed.  This has proved to be surprisingly
     effective in tests of small patterns wherein the target pattern has
     rather clean hyperplanes, but its value for locating complexity-2
     structure in large messy patterns still remains to be ascertained.
     This option can be activated/deactivated during a HYBALL run whenever
     the program pauses at the Main Menu:  Go to the control-settings panel
     (Main Menu Option 1), read the documentation on parameter WSAL provided
     there (not needed once you become familiar with WSAL's extended
     function), and choose a negative value of WSAL to be applied by Spin
     search.  (If you also want to change MODE then, do that first: Any
     shift in MODE returns WSAL to a non-negative default value.)
 _____________________________________________________________________________
                           September, 2000

   Further tests on the October, 1999, item-weighting procedure now
     provisionally called "Comp2 weighting" (heuristic for "weighting to
     recover simple structure wherein items of complexity-2 are prominent")
     confirm that this can indeed achieve rotation to complex simple structures
     that even HYBALL had previously been unable to disclose.  But success
     of Comp2 weights can be seriously undermined by large inequalities among
     the item communalities (this depends on which items are weak), a debility
     that can be expunged by Kaiser weighting which equalizes the variances
     of all items in common-factor space during rotation.  This surprisingly
     effective option can now be included when parameter WSAL is set for Comp2
     weighting.  See WSAL's on-screen documentation for instructions.
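      Kaiser weighting here corresponds to the familiar Kaiser row
      normalization: each item's loading row is rescaled by 1/sqrt(communality)
      before rotation so that all items carry equal variance in common-factor
      space, then rescaled back afterwards.  A Python sketch:

```python
import numpy as np

def kaiser_normalize(pattern):
    # Kaiser row normalization: rescale each item's loading row by
    # 1/sqrt(communality), giving every item unit variance in common-
    # factor space for the duration of rotation.  Multiplying the
    # rotated rows back by the returned scale de-normalizes them.
    P = np.asarray(pattern, float)
    scale = np.sqrt((P ** 2).sum(axis=1))    # sqrt of communalities
    return P / scale[:, None], scale
```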

       Provision has now been made for Hydata-standard scorefiles (extension
     .Dij) to preserve accuracy to three significant digits even when some
     raw scores are negative.  In some cases, this requires increasing the
     D-file's constant-width read fields to width 4 rather than the previously
     invariant width 3.  [Note, August 2003:  Certain difficulties in merging
     files with different readfield widths, described in earlier releases
     of this UPDATE, have now been obviated.]
 _____________________________________________________________________________
                           January, 2001

   MODA's repertoire of common-factoring methods now includes two additional
     extraction procedures that have become favorites in modern factor-
     analytic practice, namely, Maximum Likelihood Factor Analysis (MLFA) and
     Generalized Least Squares (a variant of Minres that maximizes a likelihood
     function by an algorithm similar to MLFA's).  My still-meager performance
     study of MLFA and GLS finds no reason as yet to prefer these to their
     counterparts previously installed in MODA:  Although they make some
     small difference in the uniquenesses and factor loadings returned, the
     residual item covariances MLFA and GLS leave after extracting a given
     number NF of factors apparently tend to be slightly larger than the
     residuals of MODA's more tenured common-factoring algorithms except
     for NF deep into what would normally be considered overfactoring.

       Principal-factor and Minres extractions in MODA now terminate with
     detail on the solution's convergence and an invitation to run some
     additional iteration cycles if that seems worthwhile.  (Probably
     it often will, and incurs negligible cost in time or keystrokes.)
     Consequently, the initial setting of iteration limits for these
      procedures is even less critical than before.

 _____________________________________________________________________________
                           November, 2001

       Expunging datafiles of variables or subject records having unacceptably
     many bad scores has been facilitated, though the datafile must still
     be transcribed by HYDATA into Hydata-standard form.  Previously, this
     required running SELECT on this D-file with precise index knowledge of
     which items or records were to be deleted.  Now when run, SELECT will
     respond to your entry of a bad-score cutoff percentage P% by deleting
     (more precisely, not selecting) all records whose scores are over P%
     missing or otherwise unusable.  And when running FIXDATA to impute bad
     scores, you can delete items having zero variance or excessive bad
     scores by entering just a cutoff level.  Information about the
     distribution of bad scores over variables, and over records, can be
      respectively found in the LOG-file for HYDATA's scrutiny of this
     dataset and on screen in the initial phase of FIXDATA's run on it.
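      The record-selection rule can be sketched as follows.  NaN stands in for
      a bad score; SELECT's actual bad-score codes and file handling differ.

```python
import numpy as np

def select_records(data, cutoff_pct):
    # Keep only records (rows) whose scores are at most cutoff_pct
    # percent bad, i.e. drop ("do not select") any record that is over
    # cutoff_pct percent missing or otherwise unusable.
    data = np.asarray(data, float)
    bad = np.isnan(data).mean(axis=1) * 100.0
    return data[bad <= cutoff_pct]
```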

 ##############################################################################

 **** NOTE.  If you have ever done any software development, you know that
             program bugs and documentation obscurities are even more certain
             than death and taxes.  If you encounter any such problem with this
             package, please notify me so that I can fix it.  Alternatively,
             if you have no trouble with it at all, I would find it helpful to
             be advised of that.  Either way, my e-mail address is

                           rozeboom@ualberta.ca

 ##############################################################################

          REPORT ON OPTIMALITIES IN CHOICE OF HYBALL PARAMETERS  (Summer, 1993)

      Reported here is a massive study of HYBALL's success at recovery of complex source
  patterns from artificial data under comprehensive variation in its procedure options.
  Although this work needs further extension, what I have already learned is an enormous
  advance beyond the primitive parameter appraisals reported in Rozeboom, 1991b.  These
  new findings will not mean much to you until you have familiarized yourself with the
  multiplicity of control options in HYBALL; but after some experience with those you
  should find the present summary of tentative conclusions well worth your attention.

      The source patterns whose recovery has been tested in this study were constructed
  as follows:  (a) Each comprised 25 data variables ("items") having 5 common factors.
  (b) For each of 100 datasets, a raw pattern of common-factor loadings was created by
  first stipulating the locations of salient vs. nonsalient loadings and then randomly
  generating the loadings under size and sign constraints differing for the two salience
  categories.  The salience locations in each pattern realized all the 25 different ways
  to assign factor-complexity level 1, 2, or 3 to items having 5 common factors. (An
  item's factor complexity is the number of common factors on which it has salient
  loadings.)  Salient raw loadings were generated with uniform probability in the size
  interval .25 - 1.0 and assigned negative sign with probability .0, .1, .2, or .4,
   whereas nonsalient loadings were uniformly randomized in interval ±W for a hyperplane-
   noise parameter W whose setting was variously .00, .05, .10, .15, or .20. (Increasing
  values of W create increasing severity of hyperplane noise, that is, blurring of the
  distinctness with which a concentration of near-zero item loadings demark optimal
  axis positioning.)  (c) For each dataset, uniquenesses were assigned to the items with
  uniform probability in the interval .30 - .90; the factors were assigned correlations
  by a procedure more complex than merits detailing here but varied these roughly between
  -.4 and .6; and the raw pattern was rescaled by rows to yield unit-variance items with
  these assigned uniquenesses and factor correlations.  (The item-specificity of this
  pattern rescaling further degrades the sharpness of hyperplanes in the source structure.)
  (d) Finally, each dataset from which factor recovery was tested consisted of 400 cases
  ("subjects") randomly sampled from the population in which the item variances and
  factor correlations were defined, thereby additionally blurring the recoverable
  hyperplanes by moderate but appreciable sampling noise.
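       Steps (a)-(b) of this recipe can be sketched in Python.  Everything below
   is my paraphrase of the text -- in particular, reading the nonsalient
   interval as [-W, W] -- and is illustrative only, not HYBALL's actual
   generator:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

def raw_pattern(salient_mask, neg_prob=0.2, w=0.10):
    # Salient loadings uniform in .25-1.0, flipped negative with
    # probability neg_prob; nonsalient (hyperplane) loadings uniform in
    # [-w, w].  All choices here are my reading of the text.
    mask = np.asarray(salient_mask, bool)
    P = rng.uniform(-w, w, size=mask.shape)        # hyperplane noise
    nsal = int(mask.sum())
    sal = rng.uniform(0.25, 1.0, size=nsal)
    sal *= np.where(rng.random(nsal) < neg_prob, -1.0, 1.0)
    P[mask] = sal
    return P

# A 25-item, 5-factor salience mask realizing every way of assigning
# factor complexity 1, 2, or 3 (5 + 10 + 10 = 25 rows):
mask = np.zeros((25, 5), dtype=bool)
for i, cols in enumerate(c for k in (1, 2, 3)
                         for c in combinations(range(5), k)):
    mask[i, list(cols)] = True
P = raw_pattern(mask)
```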

      I describe the results from this study as "tentative" because one must always be
  cautious when generalizing from artificial data to the real world.  In the present
  case, although the source patterns whose recovery was tested varied widely within
  frame limits, they still sample only a restricted space of the source complexities
  likely to underlie real data; and there is always some chance that some of HYBALL's
  procedure variants that were suboptimal in the present tests happen to be especially
  well suited to the individuality of your problem.  You will of course have no way
  to know when if ever that is true; so give preference to the control settings that
  have proved best in the artificial-data trials; but don't be reluctant to try other
  settings as well in case one or another might stumble upon a solution you find
  especially appealing.

     +---------------------------------------------------------------------+
     | Resume reading here after you have tried some HYBALL runs and have  |
     | acquired some familiarity with its repertoire of rotation controls. |
     +---------------------------------------------------------------------+

      Selection of rotation controls is one node of HYBALL's application process at which
  user decisions significantly affect outcome.  A second arises once HYBALL has produced
  an archive of rotated patterns from which one or two are to be selected for interpre-
  tation and whatever actions ensue from provisionally construing this pattern as the
  best estimate of your data's source structure.  The first decision node comprises a
  choice of rotation parameters JA, JB, BH, CV, WSAL, and some minor parameters that
  are best ignored (i.e. left at default values) unless feedback during execution urges
  otherwise.  The second selection, one's final pick of results, is best made in light
  of the pattern-quality ratings obtained by running HYLOG on the archived HYBALL
  rotations.  These appraisals at present include for each stored pattern (a) hyperplane
  counts at your choices of hyperplane bandwidth BH; (b) measures of the items' factor
   complexity in this pattern; (c) measures of the prominence of gaps in the scatter of
  factor loadings that may demark natural hyperplanes (cf. Cattell & Gorsuch, Psycho-
  metrika, 1963); and (d) hyperplane misfit measured by some <JA,JB,BH,CV,WSAL>-selected
  loss function that the rotation algorithm can be set to minimize.  Before describing
  the specific apparent optimalities for these choices, some prefatory comments are in
  order:

      1.  The accumulating evidence is clear that HYBALL generally achieves its best
  results from rotation by thorough Spin search under Brute-force scanning (SCAN)
  rather than Step-down regression (STEP).  But if you are running an older machine
  and your problem size is quite large, say well over 100 variables and 20+ factors,
  you may find extensive Spin search under SCAN intolerably protracted.

        [ Note added 2/5/03:  At modern computer speeds, megaSpin search  ]
        [ with Try counts in the hundreds or even thousands is not merely ]
        [ feasible, even under SCAN, but often quite informative as well. ]

  Even so, a short Spin search is preferable to no Spin at all, and rotation under
  STEP rather than SCAN will decrease computation time by at least half an order
  of magnitude.  I have now extended the present study to include recovery by Spin
  search under STEP as well as under SCAN, and am pleased to report that although
  this confirms the general superiority of SCAN, the difference is not as great as
  I had expected.  Indeed, at the two lowest hyperplane-noise levels, STEP is as
  good as if not better than SCAN, and SCAN's modest but by no means negligible
  superiority at the intermediate noise levels diminishes as hyperplanes become
   nearly indiscernible.  Be alerted, however, that rotation-parameter optimalities
  appear to be not altogether the same for STEP as for SCAN.

      2.  All present recovery results were obtained under within-hyperplane curvature
  setting CV = 1.0, since previous findings had strongly intimated that larger CV should
  be virtually indistinguishable from CV = 1 while the most extreme alternative to this,
  CV = -1, incurs considerable loss of recovery accuracy despite yielding higher hyper-
  plane counts.  Although the effect of varying CV still needs more testing, I anticipate
  that the work now in progress on this will confirm that little if anything is to be
   gained by shifting CV from its present 1.0 default value.  [ *** For update on
   this point, see last entry below. *** ]

      3.  Of the assorted pattern-quality measures now available in HYLOG, only the
  hyperplane-misfit ratings should be used for judging which rotations to prefer for
  interpretation.  The GAP measures have negligible relation to the rotated pattern's
  match to source; and although the best rotations do tend to have higher hyperplane
  counts and lower factor complexities than the less accurate ones, the diagnostic value
  of these ratings proves to be considerably less than that of the misfit measures.

      Here is what this study has revealed about the relative merits of the hyperplane-
  misfit parameters when tested by Spin search under CV = 1.0.

  SUMMARY EVALUATION OF THE ROTATION PARAMETERS

  JA: Alternatives tested, -3, -1, 0, 1, 2, 4.
      Under SCAN.
        Any JA in range 0 - 4 was decent at all levels of hyperplane noise, with no
        sharp differences between adjacent JA settings.  But the best setting was at
        top of this range for low noise levels and moved down with increasing noise
        to the region around 1 at the highest level tested.
      Under STEP:
        Essentially the same as under SCAN, except that JA settings 2 and 4 were not
        quite so good as 1 at the lowest noise levels and became mildly better, rather
        than worse, as hyperplane noise became severe.
      Recommendations:
        Take JA = 2 for default, or perhaps JA = 1 if you are rotating under SCAN.

  JB: Alternatives tested, 2, 4, 6.
      Under SCAN:
        JB setting 6 was much inferior to both 4 and 2 at all hyperplane-noise levels.
        Setting 2 was always best, but was appreciably superior to 4 only at the higher
        noise levels.
      Under STEP:
        Differences among the JB settings were weak and not altogether consistent in
        their observed response to hyperplane noise.  But in general, settings 4 and 6
        were about equally successful and somewhat better than 2.
      Recommendations:
        JB = 2 is clearly the default choice for SCAN rotation, whereas JB = 4 appears
        to be a better default for STEP.

  BH: Alternatives tested, .10, .15, .20, .25, .30.
      Under SCAN:
        BH in range .20 - .25 seemed distinctly preferable, with .20 best when hyperplanes
        were rather clean while .25 was better for noisy ones.
      Under STEP:
        BH levels .15 and .20 were best for the cleanest hyperplanes.  But there was
        little difference over the full tested BH range at intermediate to high hyper-
        noise levels except for some hint that .25 might be best for high noise.
      Recommendations:
        Take BH = .20 as default for both rotation modes, but also try level .25 and,
        if you do STEP, perhaps .15 as well.

  WSAL: Alternatives tested, 0, 1.0, 2.0.
      Under SCAN:
        Much to my surprise, WSAL = 1.0 proved substantially superior to WSAL = 0.
        But 0, in turn, was equally superior to 2.0 except at the highest levels
        of hyperplane noise, at which there was some inconsistency of results.
        (WSAL = 2.0  has not been tested as thoroughly as the lower settings.)
      Under STEP:
        At all levels of hyperplane noise, WSAL = 0 was much preferable to setting 1.0
        which in turn was much better than 2.0.
      Recommendations:
        Take WSAL = 0 as default for STEP rotations, but switch to WSAL = 1.0 for SCAN.
        (Possibly some setting between 0 and 1 is better yet; but even if that proves
        to be so its use will incur an appreciable decrement in computation speed.)


  EVALUATION OF HYPERPLANE-MISFIT PARAMETERS IN POSTMORTEM PATTERN APPRAISAL

      Appraisal of the relative quality of each pattern in a collection of
  alternative HYBALL rotations of the same initial pattern seeks to pick the one
  most similar to the source pattern conjectured to underlie the data.  It does
  so by rating the rotated patterns on one or more measures whose most favored
  values are hoped to signal best match to source.  The study summarized here
  makes clear that, of the assorted rating measures now available in HYLOG, the
  family of hyperplane-misfit measures parameterized by <JA,JB,BH,WSAL,CV> (the
  same as users' choice for rotation control) works best.  It turns out that the
  expected match-to-source of the archived pattern having least hyperplane misfit
  is generally affected very little by one's specific choice of these parameters
  so long as CV = 1.0.  Even so, here are the postmortem preferences suggested by
  the present study.
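
  The selection rule just described -- rate each archived rotation on a misfit
  measure and keep the one with the smallest value -- is a simple argmin.  A
  minimal sketch, with made-up file names and misfit values standing in for what
  HYLOG would actually report:

```python
# Hypothetical HYLOG-style ratings: hyperplane misfit per archived pattern.
# File names and misfit values are invented for illustration.
ratings = {
    "$mydat11.H8": 0.143,
    "$mydat12.H8": 0.128,
    "$mydat13.H8": 0.167,
}
best = min(ratings, key=ratings.get)   # least misfit = conjectured best match
print(best)                            # prints $mydat12.H8
```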
      Note. Since the difference between STEP and SCAN lies only in the manner by
  which HYBALL seeks to optimize the selected measure of hyperplane misfit, the
  only way it might figure in postmortem pattern appraisals is if patterns found
  under SCAN tend to have some configural difference from patterns found under
  STEP.  And indeed, scarcely any difference was discernible between the rating
  measures' responses to patterns produced in one mode rather than the other.
  BH is perhaps an exception, but at worst a very mild one.

  JA: JA = -3 was consistently poorest to a small degree; but no differences appeared
      among the other studied values of JA beyond a slight inferiority of JA = 0 that
      is probably spurious.

  JB: [No post-rotation pattern appraisals with JB settings other than 2 have yet
      been undertaken.]

  BH: Differences among the tested alternatives were generally mild; but with two
      qualifications, BH settings .15 and .20 were equally best at all levels of
      hyperplane noise.  The qualifications, which may not be reliable, are (a) at
      the two lowest noise levels, BH was equally best for STEP solutions at settings
      .10 and .15; and (b) for SCAN solutions at the highest noise level, BH settings
      .25 and .30 were just as good as .20 and better than .15.

  WSAL: WSAL = 2.0 was mildly but distinctly inferior to smaller WSAL at all but
      the lowest level of hyperplane noise; but there was essentially no difference
      between WSAL = 0 and WSAL = 1.0.

 ----------------------------------------------------------------------------------

                                    ADDENDUM.

      I have now completed an informative appraisal of runtime variation in CV in
  which, using the same simulation data as before, the following levels of the
  method parameters were fully crossed in SCAN mode with JB fixed at level 2:

                        CV = -1.0, 0, 1.0, 2.0
                        JA = 1, 2, 4
                        BH = .15, .20, .25
                        WSAL = 0.0, 1.0
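
  Fully crossing these levels gives 4 x 3 x 3 x 2 = 72 rotation-control settings
  per dataset.  A sketch of the enumeration (the tuple order <CV,JA,BH,WSAL> is
  my own choice for illustration):

```python
from itertools import product

# The crossed levels listed above, with JB fixed at 2 and mode fixed at SCAN.
CV   = [-1.0, 0.0, 1.0, 2.0]
JA   = [1, 2, 4]
BH   = [0.15, 0.20, 0.25]
WSAL = [0.0, 1.0]

grid = list(product(CV, JA, BH, WSAL))  # every <CV,JA,BH,WSAL> combination
print(len(grid))                        # prints 72
```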

  In decreasing order of prominence, the highlights of this study are:

  CV:  As expected, CV = -1 was MUCH inferior to the other tested CV levels.
       CV = 0 was mildly inferior to settings 1 and 2, and there was some
       indication that 2 might be even better than 1.  But superiority of 2
       over 1 appears inconclusive and trivial at best.

  BH:  At the lowest levels of hyperplane noise, BH level .15 was appreciably
       better than .20, which in turn greatly surpassed .25.  This preference
       ordering weakened to vanishing as hyperplane noise became severe, but
       at no noise level did .20, much less .25, become clearly superior to .15.
       So this study does not entirely agree with the one reported above on
       optimal BH for SCAN.  We can, however, provisionally conclude that BH
       should lie more or less in the range .15 - .20, with the high end of that
       range not to be neglected even though it is not so clear that it should
       be BH's default value.

  WSAL:  As in the previous SCAN study, WSAL = 1.0 continued its pronounced
       superiority over WSAL = 0 under SCAN at the higher hyperplane-noise
       levels, with the preference diminishing if not vanishing altogether
       as the hyperplanes become ideally clean.

  JA:  Evidence on optimal JA was not entirely unambiguous: One method of
       appraising this clearly favored JA = 1, whereas another found no
       consistent superiority of level 1 over level 2.  Both agreed, however,
       that the weak preference for JA = 1 is strongest when hyperplanes are
       very clean, and moves toward weakly favoring JA = 2 as hyperplane noise
       increases--which is not entirely consistent with the effect of hyperplane
       noise on optimal JA suggested by the earlier SCAN study.  Even so, both
       studies agree that JA = 1 is probably best under SCAN but with little to
       choose over JA = 2.

