  Orange Book Preamble
  NCSC/DOD/NIST
  December 1985 (translation to SGML 1996/12/29)

  DEPARTMENT OF DEFENSE STANDARD: DEPARTMENT OF DEFENSE TRUSTED COMPUTER
  SYSTEM EVALUATION CRITERIA (a.k.a. The Orange Book).  DoD 5200.28-STD;
  Supersedes CSC-STD-001-83, dtd 15 Aug 83; Library No. S225,711

  11..  FFOORREEWWOORRDD

  December 26, 1985


  This publication, DoD 5200.28-STD, "Department of Defense Trusted
  Computer System Evaluation Criteria," is issued under the authority of
  and in accordance with DoD Directive 5200.28, "Security Requirements
  for Automatic Data Processing (ADP) Systems," and in furtherance of
  responsibilities assigned by DoD Directive 52l5.l, "Computer Security
  Evaluation Center."  Its purpose is to provide technical
  hardware/firmware/software security criteria and associated technical
  evaluation methodologies in support of the overall ADP system security
  policy, evaluation and approval/accreditation responsibilities
  promulgated by DoD Directive 5200.28.


  The provisions of this document apply to the Office of the Secretary
  of Defense (OSD), the Military Departments, the Organization of the
  Joint Chiefs of Staff, the Unified and Specified Commands, the Defense
  Agencies and activities administratively supported by OSD (hereafter
  called "DoD Components").


  This publication is effective immediately and is mandatory for use by
  all DoD Components in carrying out ADP system technical security
  evaluation activities applicable to the processing and storage of
  classified and other sensitive DoD information and applications as set
  forth herein.


  Recommendations for revisions to this publication are encouraged and
  will be reviewed biannually by the National Computer Security Center
  through a formal review process.  Address all proposals for revision
  through appropriate channels to:  National Computer Security Center,
  Attention:  Chief, Computer Security Standards.


  DoD Components may obtain copies of this publication through their own
  publications channels.  Other federal agencies and the public may
  obtain copies from:  Office of Standards and Products, National
  Computer Security Center, Fort Meade, MD  20755-6000, Attention:
  Chief, Computer Security Standards.



  _D_o_n_a_l_d _C_. _L_a_t_h_a_m
  Assistant Secretary of Defense
  (Command, Control, Communications, and Intelligence)


  22..  AACCKKNNOOWWLLEEDDGGEEMMEENNTTSS

  Special recognition is extended to Sheila L. Brand, National Computer
  Security Center (NCSC), who integrated theory, policy, and practice
  into and directed the production of this document.


  Acknowledgment is also given for the contributions of: Grace Hammonds
  and Peter S. Tasker, the MITRE Corp., Daniel J. Edwards, NCSC, Roger
  R. Schell, former Deputy Director of NCSC, Marvin Schaefer, NCSC, and
  Theodore M. P. Lee, Sperry Corp., who as original architects
  formulated and articulated the technical issues and solutions
  presented in this document; Jeff Makey, formerly NCSC, Warren F.
  Shadle, NCSC, and Carole S. Jordan, NCSC, who assisted in the
  preparation of this document; James P. Anderson, James P. Anderson &
  Co., Steven B. Lipner, Digital Equipment Corp., Clark Weissman, System
  Development Corp., LTC Lawrence A. Noble, formerly U.S. Air Force,
  Stephen T. Walker, formerly DoD, Eugene V. Epperly, DoD, and James E.
  Studer, formerly Dept. of the Army, who gave generously of their time
  and expertise in the review and critique of this document; and
  finally, thanks are given to the computer industry and others
  interested in trusted computing for their enthusiastic advice and
  assistance throughout this effort.


  SGML translation by (and corrections to): Andrew G. Morgan
  <morgan@parc.power.net>.


  33..  PPRREEFFAACCEE

  The trusted computer system evaluation criteria defined in this
  document classify systems into four broad hierarchical divisions of
  enhanced security protection.  They provide a basis for the evaluation
  of effectiveness of security controls built into automatic data
  processing system products.  The criteria were developed with three
  objectives in mind: (a) to provide users with a yardstick with which
  to assess the degree of trust that can be placed in computer systems
  for the secure processing of classified or other sensitive
  information; (b) to provide guidance to manufacturers as to what to
  build into their new, widely-available trusted commercial products in
  order to satisfy trust requirements for sensitive applications; and
  (c) to provide a basis for specifying security requirements in
  acquisition specifications.  Two types of requirements are delineated
  for secure processing: (a) specific security feature requirements and
  (b) assurance requirements.  Some of the latter requirements enable
  evaluation personnel to determine if the required features are present
  and functioning as intended.  The scope of these criteria is to be
  applied to the set of components comprising a trusted system, and is
  not necessarily to be applied to each system component individually.
  Hence, some components of a system may be completely untrusted, while
  others may be individually evaluated to a lower or higher evaluation
  class than the trusted product considered as a whole system.  In
  trusted products at the high end of the range, the strength of the
  reference monitor is such that most of the components can be
  completely untrusted.  Though the criteria are intended to be
  application-independent, the specific security feature requirements
  may have to be interpreted when applying the criteria to specific
  systems with their own functional requirements, applications or
  special environments (e.g., communications processors, process control
  computers, and embedded systems in general).  The underlying assurance
  requirements can be applied across the entire spectrum of ADP system
  or application processing environments without special interpretation.



  44..  IINNTTRROODDUUCCTTIIOONN

  44..11..  HHiissttoorriiccaall PPeerrssppeeccttiivvee

  In October 1967, a task force was assembled under the auspices of the
  Defense Science Board to address computer security safeguards that
  would protect classified information in remote-access, resource-
  sharing computer systems.  The Task Force report, "Security Controls
  for Computer Systems," published in February 1970, made a number of
  policy and technical recommendations on actions to be taken to reduce
  the threat of compromise of classified information processed on
  remote-access computer systems.  [38] Department of Defense
  Directive 5200.28 and its accompanying manual DoD 5200.28-M, published
  in 1972 and 1973 respectively, responded to one of these
  recommendations by establishing uniform DoD policy, security
  requirements, administrative controls, and technical measures to
  protect classified information processed by DoD computer systems.
  [11; 12] Research and development work undertaken
  by the Air Force, Advanced Research Projects Agency, and other defense
  agencies in the early and mid 70's developed and demonstrated solution
  approaches for the technical problems associated with controlling the
  flow of information in resource and information sharing computer
  systems.  [1] The DoD Computer Security Initiative was
  started in 1977 under the auspices of the Under Secretary of Defense
  for Research and Engineering to focus DoD efforts addressing computer
  security issues.  [33; 37]


  Concurrent with DoD efforts to address computer security issues, work
  was begun under the leadership of the National Bureau of Standards
  (NBS) to define problems and solutions for building, evaluating, and
  auditing secure computer systems.  [17; 21] As part of this work NBS
  held two invitational workshops on the subject of audit and evaluation
  of computer security.  [24; 32] The first was held in March 1977, and
  the second in
  November of 1978.  One of the products of the second workshop was a
  definitive paper on the problems related to providing criteria for the
  evaluation of technical computer security effectiveness.  [24] As an
  outgrowth of recommendations from this report, and in
  support of the DoD Computer Security Initiative, the MITRE Corporation
  began work on a set of computer security evaluation criteria that
  could be used to assess the degree of trust one could place in a
  computer system to protect classified data.  [28; 29; 35] The
  preliminary concepts for computer security
  evaluation were defined and expanded upon at invitational workshops
  and symposia whose participants represented computer security
  expertise drawn from industry and academia in addition to the
  government.  Their work has since been subjected to much peer review
  and constructive technical criticism from the DoD, industrial research
  and development organizations, universities, and computer
  manufacturers.


  The DoD Computer Security Center (the Center) was formed in January
  1981 to staff and expand on the work started by the DoD Computer
  Security Initiative.  [19] A major goal of the Center as
  given in its DoD Charter is to encourage the widespread availability
  of trusted computer systems for use by those who process classified or
  other sensitive information.  [13] The criteria presented in
  this document have evolved from the earlier NBS and MITRE evaluation
  material.





  44..22..  SSccooppee

  The trusted computer system evaluation criteria defined in this
  document apply primarily to trusted commercially available automatic
  data processing (ADP) systems.  They are also applicable, as amplified
  below, to the evaluation of existing systems and to the specification
  of security requirements for ADP systems acquisition.  Included are
  two distinct sets of requirements: 1) specific security feature
  requirements; and 2) assurance requirements.  The specific feature
  requirements encompass the capabilities typically found in information
  processing systems employing general-purpose operating systems that
  are distinct from the applications programs being supported.  However,
  specific security feature requirements may also apply to specific
  systems with their own functional requirements, applications or
  special environments (e.g., communications processors, process control
  computers, and embedded systems in general).  The assurance
  requirements, on the other hand, apply to systems that cover the full
  range of computing environments from dedicated controllers to full
  range multilevel secure resource sharing systems.


  44..33..  PPuurrppoossee

  As outlined in the Preface, the criteria have been developed to serve
  a number of intended purposes:



  o  To provide a standard to manufacturers as to what security features
     to build into their new and planned commercial products in order
     to provide widely available systems that satisfy trust requirements
     (with particular emphasis on preventing the disclosure of data) for
     sensitive applications.

  o  To provide DoD components with a metric with which to evaluate the
     degree of trust that can be placed in computer systems for the
     secure processing of classified and other sensitive information.

  o  To provide a basis for specifying security requirements in
     acquisition specifications.


  With respect to the second purpose for development of the criteria,
  i.e., providing DoD components with a security evaluation metric,
  evaluations can be delineated into two types: (a) an evaluation can be
  performed on a computer product from a perspective that excludes the
  application environment; or, (b) it can be done to assess whether
  appropriate security measures have been taken to permit the system to
  be used operationally in a specific environment.  The former type of
  evaluation is done by the Computer Security Center through the
  Commercial Product Evaluation Process.  That process is described in
  Appendix A.


  The latter type of evaluation, i.e., those done for the purpose of
  assessing a system's security attributes with respect to a specific
  operational mission, is known as a certification evaluation.  It must
  be understood that the completion of a formal product evaluation does
  not constitute certification or accreditation for the system to be
  used in any specific application environment.  On the contrary, the
  evaluation report only provides a trusted computer system's evaluation
  rating along with supporting data describing the product system's
  strengths and weaknesses from a computer security point of view.  The
  system security certification and the formal approval/accreditation
  procedure, done in accordance with the applicable policies of the
  issuing agencies, must still be followed before a system can be
  approved for use in processing or handling classified information.
  [11; 12] Designated Approving Authorities (DAAs)
  remain ultimately responsible for specifying security of systems they
  accredit.


  The trusted computer system evaluation criteria will be used directly
  and indirectly in the certification process.  Along with applicable
  policy, it will be used directly as technical guidance for evaluation
  of the total system and for specifying system security and
  certification requirements for new acquisitions.  Where a system being
  evaluated for certification employs a product that has undergone a
  Commercial Product Evaluation, reports from that process will be used
  as input to the certification evaluation.  Technical data will be
  furnished to designers, evaluators and the Designated Approving
  Authorities to support their needs for making decisions.


  44..44..  FFuunnddaammeennttaall CCoommppuutteerr SSeeccuurriittyy RReeqquuiirreemmeennttss

  Any discussion of computer security necessarily starts from a
  statement of requirements, i.e., what it really means to call a
  computer system "secure."  In general, secure systems will control,
  through use of specific security features, access to information such
  that only properly authorized individuals, or processes operating on
  their behalf, will have access to read, write, create, or delete
  information.  Six fundamental requirements are derived from this basic
  statement of objective: four deal with what needs to be provided to
  control access to information; and two deal with how one can obtain
  credible assurances that this is accomplished in a trusted computer
  system.


  44..44..11..  PPoolliiccyy


     RReeqquuiirreemmeenntt 11 -- SSEECCUURRIITTYY PPOOLLIICCYY --
        TThheerree mmuusstt bbee aann eexxpplliicciitt aanndd wweellll--ddeeffiinneedd sseeccuurriittyy ppoolliiccyy
        eennffoorrcceedd bbyy tthhee ssyysstteemm..  Given identified subjects and objects,
        there must be a set of rules that are used by the system to
        determine whether a given subject can be permitted to gain
        access to a specific object.  Computer systems of interest must
        enforce a mandatory security policy that can effectively
        implement access rules for handling sensitive (e.g., classified)
        information.  [7; 10] These rules
        include requirements such as: No person lacking proper personnel
        security clearance shall obtain access to classified
        information.  In addition, discretionary security controls are
        required to ensure that only selected users or groups of users
        may obtain access to data (e.g., based on a need-to-know).


     RReeqquuiirreemmeenntt 22 -- MMAARRKKIINNGG --
        AAcccceessss ccoonnttrrooll llaabbeellss mmuusstt bbee aassssoocciiaatteedd wwiitthh oobbjjeeccttss..  In order
        to control access to information stored in a computer, according
        to the rules of a mandatory security policy, it must be possible
        to mark every object with a label that reliably identifies the
        object's sensitivity level (e.g., classification), and/or the
        modes of access accorded those subjects who may potentially
        access the object.






  44..44..22..  AAccccoouunnttaabbiilliittyy



     RReeqquuiirreemmeenntt 33 -- IIDDEENNTTIIFFIICCAATTIIOONN --
        IInnddiivviidduuaall ssuubbjjeeccttss mmuusstt bbee iiddeennttiiffiieedd..  Each access to
        information must be mediated based on who is accessing the
        information and what classes of information they are authorized
        to deal with.  This identification and authorization information
        must be securely maintained by the computer system and be
        associated with every active element that performs some
        security-relevant action in the system.


     RReeqquuiirreemmeenntt 44 -- AACCCCOOUUNNTTAABBIILLIITTYY --
        AAuuddiitt iinnffoorrmmaattiioonn mmuusstt bbee sseelleeccttiivveellyy kkeepptt aanndd pprrootteecctteedd ssoo tthhaatt
        aaccttiioonnss aaffffeeccttiinngg sseeccuurriittyy ccaann bbee ttrraacceedd ttoo tthhee rreessppoonnssiibbllee
        ppaarrttyy..  A trusted system must be able to record the occurrences
        of security-relevant events in an audit log.  The capability to
        select the audit events to be recorded is necessary to minimize
        the expense of auditing and to allow efficient analysis.  Audit
        data must be protected from modification and unauthorized
        destruction to permit detection and after-the-fact
        investigations of security violations.
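
  The selective recording described in this requirement can be sketched
  as follows.  This is an illustrative example only, not part of the
  standard; the event names and record fields are hypothetical:

```python
import datetime

class AuditLog:
    """Sketch of selective audit recording: only administrator-selected
    event types are written, limiting the expense of auditing.  Records
    are only ever appended in this sketch, in support of after-the-fact
    investigation."""

    def __init__(self, selected_events):
        self.selected = set(selected_events)  # e.g. {"login", "file_open"}
        self._records = []                    # append-only in this sketch

    def record(self, event_type, subject, outcome):
        if event_type not in self.selected:
            return False  # event type not selected for auditing
        self._records.append({
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "event": event_type,
            "subject": subject,
            "outcome": outcome,
        })
        return True
```

  For example, with only "login" selected, a "file_open" event is
  skipped while a "login" event is traced to the responsible subject.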



  44..44..33..  AAssssuurraannccee



     RReeqquuiirreemmeenntt 55 -- AASSSSUURRAANNCCEE --
        TThhee ccoommppuutteerr ssyysstteemm mmuusstt ccoonnttaaiinn hhaarrddwwaarree//ssooffttwwaarree mmeecchhaanniissmmss
        tthhaatt ccaann bbee iinnddeeppeennddeennttllyy eevvaalluuaatteedd ttoo pprroovviiddee ssuuffffiicciieenntt
        aassssuurraannccee tthhaatt tthhee ssyysstteemm eennffoorrcceess rreeqquuiirreemmeennttss 11 tthhrroouugghh 44
        aabboovvee..  In order to assure that the four requirements of
        Security Policy, Marking, Identification, and Accountability are
        enforced by a computer system, there must be some identified and
        unified collection of hardware and software controls that
        perform those functions.  These mechanisms are typically
        embedded in the operating system and are designed to carry out
        the assigned tasks in a secure manner.  The basis for trusting
        such system mechanisms in their operational setting must be
        clearly documented such that it is possible to independently
        examine the evidence to evaluate their sufficiency.


     RReeqquuiirreemmeenntt 66 -- CCOONNTTIINNUUOOUUSS PPRROOTTEECCTTIIOONN --
        TThhee ttrruusstteedd mmeecchhaanniissmmss tthhaatt eennffoorrccee tthheessee bbaassiicc rreeqquuiirreemmeennttss
        mmuusstt bbee ccoonnttiinnuuoouussllyy pprrootteecctteedd aaggaaiinnsstt ttaammppeerriinngg aanndd//oorr
        uunnaauutthhoorriizzeedd cchhaannggeess..  No computer system can be considered
        truly secure if the basic hardware and software mechanisms that
        enforce the security policy are themselves subject to
        unauthorized modification or subversion.  The continuous
        protection requirement has direct implications throughout the
        computer system's life-cycle.



  These fundamental requirements form the basis for the individual
  evaluation criteria applicable for each evaluation division and class.
  The interested reader is referred to Section 5 of this document,
  "Control Objectives for Trusted Computer Systems," for a more complete
  discussion and further amplification of these fundamental requirements
  as they apply to general-purpose information processing systems and to
  Section 7 for amplification of the relationship between Policy and
  these requirements.



  44..55..  SSttrruuccttuurree ooff tthhee DDooccuummeenntt

  The remainder of this document is divided into two parts, four
  appendices, and a glossary.  Part I (Sections 1 through 4) presents
  the detailed criteria derived from the fundamental requirements
  described above and relevant to the rationale and policy excerpts
  contained in Part II.


  Part II (Sections 5 through 10) provides a discussion of basic
  objectives, rationale, and national policy behind the development of
  the criteria, and guidelines for developers pertaining to: mandatory
  access control rules implementation, the covert channel problem, and
  security testing.  It is divided into six sections.  Section 5
  discusses the use of control objectives in general and presents the
  three basic control objectives of the criteria.  Section 6 provides
  the theoretical basis behind the criteria.  Section 7 gives excerpts
  from pertinent regulations, directives, OMB Circulars, and Executive
  Orders which provide the basis for many trust requirements for
  processing nationally sensitive and classified information with
  computer systems.  Section 8 provides guidance to system developers on
  expectations in dealing with the covert channel problem.  Section 9
  provides guidelines dealing with mandatory security.  Section 10
  provides guidelines for security testing.  There are four appendices,
  including a description of the Trusted Computer System Commercial
  Products Evaluation Process (Appendix A), summaries of the evaluation
  divisions (Appendix B) and classes (Appendix C), and finally a
  directory of requirements ordered alphabetically.  In addition, there
  is a glossary.


  44..66..  SSttrruuccttuurree ooff tthhee CCrriitteerriiaa

  The criteria are divided into four divisions: D, C, B, and A ordered
  in a hierarchical manner with the highest division (A) being reserved
  for systems providing the most comprehensive security.  Each division
  represents a major improvement in the overall confidence one can place
  in the system for the protection of sensitive information.  Within
  divisions C and B there are a number of subdivisions known as classes.
  The classes are also ordered in a hierarchical manner with systems
  representative of division C and lower classes of division B being
  characterized by the set of computer security mechanisms that they
  possess.  Assurance of correct and complete design and implementation
  for these systems is gained mostly through testing of the security-
  relevant portions of the system.  The security-relevant portions of a
  system are referred to throughout this document as the Trusted
  Computing Base (TCB).  Systems representative of higher classes in
  division B and division A derive their security attributes more from
  their design and implementation structure.  Increased assurance that
  the required features are operative, correct, and tamperproof under
  all circumstances is gained through progressively more rigorous
  analysis during the design process.


  Within each class, four major sets of criteria are addressed.  The
  first three represent features necessary to satisfy the broad control
  objectives of Security Policy, Accountability, and Assurance that are
  discussed in Part II, Section 5.  The fourth set, Documentation,
  describes the type of written evidence in the form of user guides,
  manuals, and the test and design documentation required for each
  class.

  A reader using this publication for the first time may find it helpful
  to first read Part II, before continuing on with Part I.



  55..  GGLLOOSSSSAARRYY



     AAcccceessss
        - A specific type of interaction between a subject and an object
        that results in the flow of information from one to the other.


     AApppprroovvaall//AAccccrreeddiittaattiioonn
        - The official authorization that is granted to an ADP system to
        process sensitive information in its operational environment,
        based upon comprehensive security evaluation of the system's
        hardware, firmware, and software security design, configuration,
        and implementation and of the other system procedural,
        administrative, physical, TEMPEST, personnel, and communications
        security controls.


     AAuuddiitt TTrraaiill
        - A set of records that collectively provide documentary
        evidence of processing used to aid in tracing from original
        transactions forward to related records and reports, and/or
        backwards from records and reports to their component source
        transactions.


     AAuutthheennttiiccaattee
        - To establish the validity of a claimed identity.


     AAuuttoommaattiicc DDaattaa PPrroocceessssiinngg ((AADDPP)) SSyysstteemm
        - An assembly of computer hardware, firmware, and software
        configured for the purpose of classifying, sorting, calculating,
        computing, summarizing, transmitting and receiving, storing, and
        retrieving data with a minimum of human intervention.


     BBaannddwwiiddtthh
        - A characteristic of a communication channel that is the amount
        of information that can be passed through it in a given amount
        of time, usually expressed in bits per second.


     BBeellll--LLaaPPaadduullaa MMooddeell
        - A formal state transition model of computer security policy
        that describes a set of access control rules.  In this formal
        model, the entities in a computer system are divided into
        abstract sets of subjects and objects.  The notion of a secure
        state is defined and it is proven that each state transition
        preserves security by moving from secure state to secure state;
        thus, inductively proving that the system is secure.  A system
        state is defined to be "secure" if the only permitted access
        modes of subjects to objects are in accordance with a specific
        security policy.  In order to determine whether or not a
        specific access mode is allowed, the clearance of a subject is
        compared to the classification of the object and a determination
        is made as to whether the subject is authorized for the specific
        access mode.  The clearance/classification scheme is expressed
        in terms of a lattice.  See also: Lattice, Simple Security
        Property, *- Property.
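
        The access rules described above can be sketched as follows.
        This is an illustrative example, not part of the standard; the
        level ordering and category sets are hypothetical:

```python
# Illustrative sketch of the Bell-LaPadula access checks.  The level
# names and categories here are hypothetical examples; a real system
# derives them from the applicable security policy.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def dominates(level_a, cats_a, level_b, cats_b):
    """(level_a, cats_a) dominates (level_b, cats_b) when its hierarchical
    classification is at least as high and its categories are a superset."""
    return LEVELS[level_a] >= LEVELS[level_b] and set(cats_b) <= set(cats_a)

def may_read(subj_level, subj_cats, obj_level, obj_cats):
    # Simple Security Property: the subject's clearance must dominate
    # the object's classification ("no read up").
    return dominates(subj_level, subj_cats, obj_level, obj_cats)

def may_write(subj_level, subj_cats, obj_level, obj_cats):
    # *-Property: the object's classification must dominate the
    # subject's level ("no write down"), so information cannot flow
    # to a lower level.
    return dominates(obj_level, obj_cats, subj_level, subj_cats)
```

        For example, a SECRET-cleared subject may read a CONFIDENTIAL
        object but may not write to it, since writing down could leak
        SECRET information.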


     CCeerrttiiffiiccaattiioonn
        - The technical evaluation of a system's security features, made
        as part of and in support of the approval/accreditation process,
        that establishes the extent to which a particular computer
        system's design and implementation meet a set of specified
        security requirements.


     CChhaannnneell
        - An information transfer path within a system.  May also refer
        to the mechanism by which the path is effected.


     CCoovveerrtt CChhaannnneell
        - A communication channel that allows a process to transfer
        information in a manner that violates the system's security
        policy.  See also:  Covert Storage Channel, Covert Timing
        Channel.


     CCoovveerrtt SSttoorraaggee CChhaannnneell
        - A covert channel that involves the direct or indirect writing
        of a storage location by one process and the direct or indirect
        reading of the storage location by another process.  Covert
        storage channels typically involve a finite resource (e.g.,
        sectors on a disk) that is shared by two subjects at different
        security levels.
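
        A channel of this kind can be illustrated with a toy simulation
        (hypothetical, for exposition only): a high-level sender signals
        one bit per interval by exhausting, or leaving free, a shared
        finite pool, and a low-level receiver infers each bit from
        whether its own allocation request succeeds.

```python
class SharedPool:
    """A finite shared resource, e.g., sectors on a disk."""
    def __init__(self, size):
        self.size = size
        self.used = 0

    def allocate(self):
        if self.used < self.size:
            self.used += 1
            return True
        return False

    def free_all(self):
        self.used = 0

def transmit(pool, bits):
    """Toy covert storage channel: the sender encodes 1 as pool-exhausted
    and 0 as pool-free; the receiver probes with one allocation request
    each interval and observes whether it succeeds."""
    received = []
    for bit in bits:
        pool.free_all()
        if bit:
            while pool.allocate():  # exhaust the pool to signal a 1
                pass
        if pool.allocate():         # receiver's probe succeeded: pool free
            pool.used -= 1          # return the probe allocation
            received.append(0)
        else:                       # probe failed: pool was full
            received.append(1)
    return received
```

        The receiver never reads the sender's data directly; information
        flows solely through the shared resource's allocation state.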


     CCoovveerrtt TTiimmiinngg CChhaannnneell
        - A covert channel in which one process signals information to
        another by modulating its own use of system resources (e.g., CPU
        time) in such a way that this manipulation affects the real
        response time observed by the second process.


     DDaattaa
        - Information with a specific physical representation.


     DDaattaa IInntteeggrriittyy
        - The state that exists when computerized data is the same as
        that in the source documents and has not been exposed to
        accidental or malicious alteration or destruction.


     DDeessccrriippttiivvee TToopp--LLeevveell SSppeecciiffiiccaattiioonn ((DDTTLLSS))
        - A top-level specification that is written in a natural
        language (e.g., English), an informal program design notation,
        or a combination of the two.


     DDiissccrreettiioonnaarryy AAcccceessss CCoonnttrrooll
        - A means of restricting access to objects based on the identity
        of subjects and/or groups to which they belong.  The controls
        are discretionary in the sense that a subject with a certain
        access permission is capable of passing that permission (perhaps
        indirectly) on to any other subject (unless restrained by
        mandatory access control).
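
        The discretionary passing of permissions can be sketched as
        follows.  This is an illustration only; the access modes and the
        separate "grant" right are hypothetical simplifications:

```python
class ProtectedObject:
    """Sketch of a discretionary access control list: access is decided
    by subject identity, and a subject holding the (hypothetical)
    "grant" right may pass permissions on to other subjects."""

    def __init__(self, owner):
        self.acl = {owner: {"read", "write", "grant"}}

    def allowed(self, subject, mode):
        return mode in self.acl.get(subject, set())

    def grant(self, granter, grantee, mode):
        if "grant" not in self.acl.get(granter, set()):
            raise PermissionError(f"{granter} may not grant access")
        self.acl.setdefault(grantee, set()).add(mode)
```

        Note the discretionary weakness the definition alludes to: once
        a subject holds a permission, nothing in this mechanism prevents
        it from propagating that permission further.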


     DDoommaaiinn
        - The set of objects that a subject has the ability to access.


     DDoommiinnaattee
        - Security level S1 is said to dominate security level S2 if the
        hierarchical classification of S1 is greater than or equal to
        that of S2 and the non-hierarchical categories of S1 include all
        those of S2 as a subset.


     EExxppllooiittaabbllee CChhaannnneell
        - Any channel that is useable or detectable by subjects external
        to the Trusted Computing Base.


     FFllaaww HHyyppootthheessiiss MMeetthhooddoollooggyy
        - A system analysis and penetration technique where
        specifications and documentation for the system are analyzed and
        then flaws in the system are hypothesized.  The list of
        hypothesized flaws is then prioritized on the basis of the
        estimated probability that a flaw actually exists and, assuming
        a flaw does exist, on the ease of exploiting it and on the
        extent of control or compromise it would provide.  The
        prioritized list is used to direct the actual testing of the
        system.
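
        The prioritization step can be sketched as follows (a
        hypothetical illustration; the scoring fields and their
        combination are not prescribed by the standard):

```python
# Hypothetical sketch of flaw prioritization: each hypothesized flaw is
# ranked by its estimated probability of existing together with the ease
# of exploiting it and the extent of compromise it would provide.
def prioritize(flaws):
    """Each flaw is a dict with 'name', 'p_exists', 'ease', and 'impact'
    scores in [0, 1]; the highest-scoring flaws are tested first."""
    return sorted(flaws,
                  key=lambda f: f["p_exists"] * f["ease"] * f["impact"],
                  reverse=True)
```

        The resulting ordered list then directs the actual penetration
        testing of the system.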


     FFllaaww
        - An error of commission, omission, or oversight in a system
        that allows protection mechanisms to be bypassed.


     FFoorrmmaall PPrrooooff
        - A complete and convincing mathematical argument, presenting
        the full logical justification for each proof step, for the
        truth of a theorem or set of theorems.  The formal verification
        process uses formal proofs to show the truth of certain
        properties of formal specification and for showing that computer
        programs satisfy their specifications.


     FFoorrmmaall SSeeccuurriittyy PPoolliiccyy MMooddeell
        - A mathematically precise statement of a security policy.  To
        be adequately precise, such a model must represent the initial
        state of a system, the way in which the system progresses from
        one state to another, and a definition of a "secure" state of
        the system.  To be acceptable as a basis for a TCB, the model
        must be supported by a formal proof that if the initial state of
        the system satisfies the definition of a "secure" state and if
        all assumptions required by the model hold, then all future
        states of the system will be secure.  Some formal modeling
        techniques include:  state transition models, temporal logic
        models, denotational semantics models, algebraic specification
        models.  An example is the model described by Bell and LaPadula
        in reference [2].  See also:  Bell-LaPadula Model, Security
        Policy Model.


     FFoorrmmaall TToopp--LLeevveell SSppeecciiffiiccaattiioonn ((FFTTLLSS))
        - A Top-Level Specification that is written in a formal
        mathematical language to allow theorems showing the
        correspondence of the system specification to its formal
        requirements to be hypothesized and formally proven.


     FFoorrmmaall VVeerriiffiiccaattiioonn
        - The process of using formal proofs to demonstrate the
        consistency (design verification) between a formal specification
        of a system and a formal security policy model or
        (implementation verification) between the formal specification
        and its program implementation.


     FFrroonntt--EEnndd SSeeccuurriittyy FFiilltteerr
        - A process that is invoked to process data according to a
        specified security policy prior to releasing the data outside
        the processing environment or upon receiving data from an
        external source.


     FFuunnccttiioonnaall TTeessttiinngg
        - The portion of security testing in which the advertised
        features of a system are tested for correct operation.


     GGeenneerraall--PPuurrppoossee SSyysstteemm
        - A computer system that is designed to aid in solving a wide
        variety of problems.


     GGrraannuullaarriittyy
        - The relative fineness or coarseness by which a mechanism can
        be adjusted.  The phrase "the granularity of a single user"
        means the access control mechanism can be adjusted to include or
        exclude any single user.


     LLaattttiiccee
        - A partially ordered set for which every pair of elements has a
        greatest lower bound and a least upper bound.
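
        Security levels form such a lattice; a sketch of the two bounds,
        assuming levels are modeled as (classification rank, category
        set) pairs:

```python
def lub(s1, s2):
    # Least upper bound: the lowest level that dominates both s1 and s2.
    return (max(s1[0], s2[0]), s1[1] | s2[1])

def glb(s1, s2):
    # Greatest lower bound: the highest level dominated by both.
    return (min(s1[0], s2[0]), s1[1] & s2[1])

print(lub((1, {"NATO"}), (2, {"CRYPTO"})) == (2, {"NATO", "CRYPTO"}))  # True
print(glb((1, {"NATO"}), (2, {"CRYPTO"})) == (1, set()))               # True
```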


     LLeeaasstt PPrriivviilleeggee
        - This principle requires that each subject in a system be
        granted the most restrictive set of privileges (or lowest
        clearance) needed for the performance of authorized tasks.  The
        application of this principle limits the damage that can result
        from accident, error, or unauthorized use.


     MMaannddaattoorryy AAcccceessss CCoonnttrrooll
        - A means of restricting access to objects based on the
        sensitivity (as represented by a label) of the information
        contained in the objects and the formal authorization (i.e.,
        clearance) of subjects to access information of such
        sensitivity.


     MMuullttiilleevveell DDeevviiccee
        - A device that is used in a manner that permits it to
        simultaneously process data of two or more security levels
        without risk of compromise.  To accomplish this, sensitivity
        labels are normally stored on the same physical medium and in
        the same form (i.e., machine-readable or human-readable) as the
        data being processed.


     MMuullttiilleevveell SSeeccuurree
        - A class of system containing information with different
        sensitivities that simultaneously permits access by users with
        different security clearances and needs-to-know, but prevents
        users from obtaining access to information for which they lack
        authorization.


     OObbjjeecctt
        - A passive entity that contains or receives information.
        Access to an object potentially implies access to the
        information it contains.  Examples of objects are:  records,
        blocks, pages, segments, files, directories, directory trees,
        and programs, as well as bits, bytes, words, fields, processors,
        video displays, keyboards, clocks, printers, network nodes, etc.


     OObbjjeecctt RReeuussee
        - The reassignment to some subject of a medium (e.g., page
        frame, disk sector, magnetic tape) that contained one or more
        objects.  To be securely reassigned, such media must contain no
        residual data from the previously contained object(s).
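
        A minimal sketch of the scrubbing this definition calls for,
        modeling the reassigned medium as a bytearray "page" (an
        invented example, not anything prescribed by the standard):

```python
def scrub(page):
    # Overwrite residual data from the previously contained object
    # before the medium is reassigned to a new subject.
    for i in range(len(page)):
        page[i] = 0
    return page

page = bytearray(b"SECRET DATA")
scrub(page)
print(page == bytearray(len(page)))  # True: no residual data remains
```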


     OOuuttppuutt
        - Information that has been exported by a TCB.


     PPaasssswwoorrdd
        - A private character string that is used to authenticate an
        identity.


     PPeenneettrraattiioonn TTeessttiinngg
        - The portion of security testing in which the penetrators
        attempt to circumvent the security features of a system.  The
        penetrators may be assumed to use all system design and
        implementation documentation, which may include listings of
        system source code, manuals, and circuit diagrams.  The
        penetrators work under no constraints other than those that
        would be applied to ordinary users.


     PPrroocceessss
        - A program in execution.  It is completely characterized by a
        single current execution point (represented by the machine
        state) and address space.


     PPrrootteeccttiioonn--CCrriittiiccaall PPoorrttiioonnss ooff tthhee TTCCBB
        - Those portions of the TCB whose normal function is to deal
        with the control of access between subjects and objects.


     PPrrootteeccttiioonn PPhhiilloossoopphhyy
        - An informal description of the overall design of a system that
        delineates each of the protection mechanisms employed.  A
        combination (appropriate to the evaluation class) of formal and
        informal techniques is used to show that the mechanisms are
        adequate to enforce the security policy.


     RReeaadd
        - A fundamental operation that results only in the flow of
        information from an object to a subject.


     RReeaadd AAcccceessss
        - Permission to read information.


     RReeaadd--OOnnllyy MMeemmoorryy ((RROOMM))
        - A storage area in which the contents can be read but not
        altered during normal computer processing.


     RReeffeerreennccee MMoonniittoorr CCoonncceepptt
        - An access control concept that refers to an abstract machine
        that mediates all accesses to objects by subjects.


     RReessoouurrccee
        - Anything used or consumed while performing a function.  The
        categories of resources are: time, information, objects
        (information containers), or processors (the ability to use
        information).  Specific examples are: CPU time; terminal connect
        time; amount of directly-addressable memory; disk space; number
        of I/O requests per minute, etc.


     SSeeccuurriittyy KKeerrnneell
        - The hardware, firmware, and software elements of a Trusted
        Computing Base that implement the reference monitor concept.  It
        must mediate all accesses, be protected from modification, and
        be verifiable as correct.


     SSeeccuurriittyy LLeevveell
        - The combination of a hierarchical classification and a set of
        non-hierarchical categories that represents the sensitivity of
        information.


     SSeeccuurriittyy PPoolliiccyy
        - The set of laws, rules, and practices that regulate how an
        organization manages, protects, and distributes sensitive
        information.


     SSeeccuurriittyy PPoolliiccyy MMooddeell
        - An informal presentation of a formal security policy model.


     SSeeccuurriittyy RReelleevvaanntt EEvveenntt
        - Any event that attempts to change the security state of the
        system (e.g., change discretionary access controls, change the
        security level of the subject, change user password, etc.).
        Also, any event that attempts to violate the security policy of
        the system (e.g., too many attempts to login, attempts to
        violate the mandatory access control limits of a device,
        attempts to downgrade a file, etc.).


     SSeeccuurriittyy TTeessttiinngg
        - A process used to determine that the security features of a
        system are implemented as designed and that they are adequate
        for a proposed application environment.  This process includes
        hands-on functional testing, penetration testing, and
        verification.  See also: Functional Testing, Penetration
        Testing, Verification.


     SSeennssiittiivvee IInnffoorrmmaattiioonn
        - Information that, as determined by a competent authority, must
        be protected because its unauthorized disclosure, alteration,
        loss, or destruction will at least cause perceivable damage to
        someone or something.


     SSeennssiittiivviittyy LLaabbeell
        - A piece of information that represents the security level of
        an object and that describes the sensitivity (e.g.,
        classification) of the data in the object.   Sensitivity labels
        are used by the TCB as the basis for mandatory access control
        decisions.


     SSiimmppllee SSeeccuurriittyy CCoonnddiittiioonn
        - A Bell-LaPadula security model rule allowing a subject read
        access to an object only if the security level of the subject
        dominates the security level of the object.


     SSiinnggllee--LLeevveell DDeevviiccee
        - A device that is used to process data of a single security
        level at any one time.  Since the device need not be trusted to
        separate data of different security levels, sensitivity labels
        do not have to be stored with the data being processed.


     **--PPrrooppeerrttyy ((SSttaarr PPrrooppeerrttyy))
        - A Bell-LaPadula security model rule allowing a subject write
        access to an object only if the security level of the subject is
        dominated by the security level of the object.  Also known as
        the Confinement Property.
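
        Taken together, the simple security condition ("no read up")
        and the *-property ("no write down") can be sketched as follows,
        assuming the dominates relation from the "Dominate" entry and a
        (classification rank, category set) modeling of levels:

```python
def dominates(s1, s2):
    # S1 dominates S2: higher-or-equal rank, superset of categories.
    (c1, k1), (c2, k2) = s1, s2
    return c1 >= c2 and k2 <= k1

def may_read(subject, obj):
    # Simple security condition: the subject must dominate the object.
    return dominates(subject, obj)

def may_write(subject, obj):
    # *-property: the subject must be dominated by the object, so
    # information cannot flow down to a lower security level.
    return dominates(obj, subject)

secret, unclassified = (2, set()), (0, set())
print(may_read(secret, unclassified), may_write(secret, unclassified))  # True False
```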


     SSttoorraaggee OObbjjeecctt
        - An object that supports both read and write accesses.


     SSuubbjjeecctt
        - An active entity, generally in the form of a person, process,
        or device that causes information to flow among objects or
        changes the system state.  Technically, a process/domain pair.


     SSuubbjjeecctt SSeeccuurriittyy LLeevveell
        - A subject's security level is equal to the security level of
        the objects to which it has both read and write access.  A
        subject's security level must always be dominated by the
        clearance of the user the subject is associated with.


     TTEEMMPPEESSTT
        - The study and control of spurious electronic signals emitted
        from ADP equipment.


     TToopp--LLeevveell SSppeecciiffiiccaattiioonn ((TTLLSS))
        - A non-procedural description of system behavior at the most
        abstract level.  Typically a functional specification that omits
        all implementation details.


     TTrraapp DDoooorr
        - A hidden software or hardware mechanism that permits system
        protection mechanisms to be circumvented.  It is activated in
        some non-apparent manner (e.g., special "random" key sequence at
        a terminal).


     TTrroojjaann HHoorrssee
        - A computer program with an apparently or actually useful
        function that contains additional (hidden) functions that
        surreptitiously exploit the legitimate authorizations of the
        invoking process to the detriment of security.  For example,
        making a "blind copy" of a sensitive file for the creator of the
        Trojan Horse.

     TTrruusstteedd CCoommppuutteerr SSyysstteemm
        - A system that employs sufficient hardware and software
        integrity measures to allow its use for the simultaneous
        processing of a range of sensitive or classified information.


     TTrruusstteedd CCoommppuuttiinngg BBaassee ((TTCCBB))
        - The totality of protection mechanisms within a computer system
        -- including hardware, firmware, and software -- the combination
        of which is responsible for enforcing a security policy.  A TCB
        consists of one or more components that together enforce a
        unified security policy over a product or system.  The ability
        of a trusted computing base to correctly enforce a security
        policy depends solely on the mechanisms within the TCB and on
        the correct input by system administrative personnel of
        parameters (e.g., a user's clearance) related to the security
        policy.


     TTrruusstteedd PPaatthh
        - A mechanism by which a person at a terminal can communicate
        directly with the Trusted Computing Base.  This mechanism can
        only be activated by the person or the Trusted Computing Base
        and cannot be imitated by untrusted software.


     TTrruusstteedd SSooffttwwaarree
        - The software portion of a Trusted Computing Base.


     UUsseerr
        - Any person who interacts directly with a computer system.


     VVeerriiffiiccaattiioonn
        - The process of comparing two levels of system specification
        for proper correspondence (e.g., security policy model with top-
        level specification, TLS with source code, or source code with
        object code).  This process may or may not be automated.


     WWrriittee
        - A fundamental operation that results only in the flow of
        information from a subject to an object.


     WWrriittee AAcccceessss
        - Permission to write an object.


















  66..  RREEFFEERREENNCCEESS


  1.  Anderson, J. P.  Computer Security Technology Planning Study, ESD-
  TR-73-51, vol. I, ESD/AFSC, Hanscom AFB, Bedford, Mass., October 1972
  (NTIS AD-758 206).



  2.  Bell, D. E. and LaPadula, L. J.  Secure Computer Systems: Unified
  Exposition and Multics Interpretation, MTR-2997 Rev. 1, MITRE Corp.,
  Bedford, Mass., March 1976.



  3.  Brand, S. L.  "An Approach to Identification and Audit of
  Vulnerabilities and Control in Application Systems," in Audit and
  Evaluation of Computer Security II: System Vulnerabilities and
  Controls, Z. Ruthberg, ed., NBS Special Publication #500-57, MD78733,
  April 1980.



  4.  Brand, S. L.  "Data Processing and A-123," in Proceedings of the
  Computer Performance Evaluation User's Group 18th Meeting, C. B.
  Wilson, ed., NBS Special Publication #500-95, October 1982.



  5.  DCID 1/16, Security of Foreign Intelligence in Automated Data
  Processing Systems and Networks (U), 4 January 1983.



  6.  DIAM 50-4, Security of Compartmented Computer Operations (U), 24
  June 1980.



  7.  Denning, D. E.  "A Lattice Model of Secure Information Flow," in
  Communications of the ACM, vol. 19, no. 5 (May 1976), pp. 236-243.



  8.  Denning, D. E.  Secure Information Flow in Computer Systems, Ph.D.
  dissertation, Purdue Univ., West Lafayette, Ind., May 1975.



  9.  DoD Directive 5000.29, Management of Computer Resources in Major
  Defense Systems, 26 April 1976.



  10.  DoD 5200.1-R, Information Security Program Regulation, August
  1982.



  11.  DoD Directive 5200.28, Security Requirements for Automatic Data
  Processing (ADP) Systems, revised April 1978.



  12.  DoD 5200.28-M, ADP Security Manual -- Techniques and Procedures
  for Implementing, Deactivating, Testing, and Evaluating Secure
  Resource-Sharing ADP Systems, revised June 1979.



  13. DoD Directive 5215.1, Computer Security Evaluation Center, 25
  October 1982.



  14. DoD 5220.22-M, Industrial Security Manual for Safeguarding
  Classified Information, March 1984.



  15. DoD 5220.22-R, Industrial Security Regulation, February 1984.



  16. DoD Directive 5400.11, Department of Defense Privacy Program, 9
  June 1982.



  17. DoD Directive 7920.1, Life Cycle Management of Automated
  Information Systems (AIS), 17 October 1978.



  18. Executive Order 12356, National Security Information, 6 April
  1982.



  19. Faurer, L. D.  "Keeping the Secrets Secret," in Government Data
  Systems, November - December 1981, pp. 14-17.



  20. Federal Information Processing Standards Publication (FIPS PUB)
  39, Glossary for Computer Systems Security, 15 February 1976.



  21. Federal Information Processing Standards Publication (FIPS PUB)
  73, Guidelines for Security of Computer Applications, 30 June 1980.



  22. Federal Information Processing Standards Publication (FIPS PUB)
  102, Guideline for Computer Security Certification and Accreditation.



  23. Lampson, B. W.  "A Note on the Confinement Problem," in
  Communications of the ACM, vol. 16, no. 10 (October 1973), pp.
  613-615.



  24. Lee, T. M. P., et al.  "Processors, Operating Systems and Nearby
  Peripherals: A Consensus Report," in Audit and Evaluation of Computer
  Security II: System Vulnerabilities and Controls, Z. Ruthberg, ed.,
  NBS Special Publication #500-57, MD78733, April 1980.



  25. Lipner, S. B.  A Comment on the Confinement Problem, MITRE Corp.,
  Bedford, Mass.



  26. Millen, J. K.  "An Example of a Formal Flow Violation," in
  Proceedings of the IEEE Computer Society 2nd International Computer
  Software and Applications Conference, November 1978, pp. 204-208.



  27. Millen, J. K.  "Security Kernel Validation in Practice," in
  Communications of the ACM, vol. 19, no. 5 (May 1976), pp. 243-250.



  28. Nibaldi, G. H.  Proposed Technical Evaluation Criteria for Trusted
  Computer Systems, MITRE Corp., Bedford, Mass., M79-225, AD-A108-832,
  25 October 1979.



  29. Nibaldi, G. H.  Specification of a Trusted Computing Base (TCB),
  MITRE Corp., Bedford, Mass., M79-228, AD-A108-831, 30 November 1979.



  30. OMB Circular A-71, Transmittal Memorandum No. 1, Security of
  Federal Automated Information Systems, 27 July 1978.



  31. OMB Circular A-123, Internal Control Systems, 5 November 1981.



  32. Ruthberg, Z. and McKenzie, R., eds.  Audit and Evaluation of
  Computer Security, NBS Special Publication #500-19, October 1977.



  33. Schaefer, M., Linde, R. R., et al.  "Program Confinement in
  KVM/370," in Proceedings of the ACM National Conference, October 1977,
  Seattle.



  34. Schell, R. R.  "Security Kernels: A Methodical Design of System
  Security," in Technical Papers, USE Inc. Spring Conference, 5-9 March
  1979, pp. 245-250.



  35. Trotter, E. T. and Tasker, P. S.  Industry Trusted Computer
  Systems Evaluation Process, MITRE Corp., Bedford, Mass., MTR-3931, 1
  May 1980.



  36. Turn, R.  Trusted Computer Systems: Needs and Incentives for Use
  in Government and Private Sector, (AD # A103399), Rand Corporation
  (R-28811-DR&E), June 1981.



  37. Walker, S. T.  "The Advent of Trusted Computer Operating Systems,"
  in National Computer Conference Proceedings, May 1980, pp. 655-665.



  38. Ware, W. H., ed., Security Controls for Computer Systems: Report
  of Defense Science Board Task Force on Computer Security, AD #
  A076617/0, Rand Corporation, Santa Monica, Calif., February 1970,
  reissued October 1979.


























































                            TTaabbllee ooff CCoonntteennttss


  1. FOREWORD  . . . . . . . . . . . . . . . . . . . . . . . . . . .   2
  2. ACKNOWLEDGEMENTS  . . . . . . . . . . . . . . . . . . . . . . .   3
  3. PREFACE . . . . . . . . . . . . . . . . . . . . . . . . . . . .   4
  4. INTRODUCTION  . . . . . . . . . . . . . . . . . . . . . . . . .   5
  4.1. Historical Perspective  . . . . . . . . . . . . . . . . . . .   5
  4.2. Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . .   6
  4.3. Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . .   6
  4.4. Fundamental Computer Security Requirements  . . . . . . . . .   7
  4.4.1. Policy  . . . . . . . . . . . . . . . . . . . . . . . . . .   7
  4.4.2. Accountability  . . . . . . . . . . . . . . . . . . . . . .   8
  4.4.3. Assurance . . . . . . . . . . . . . . . . . . . . . . . . .   8
  4.5. Structure of the Document . . . . . . . . . . . . . . . . . .   9
  4.6. Structure of the Criteria . . . . . . . . . . . . . . . . . .   9
  5. GLOSSARY  . . . . . . . . . . . . . . . . . . . . . . . . . . .  11
  6. REFERENCES  . . . . . . . . . . . . . . . . . . . . . . . . . .  19

