About SISO University
 

SISO University (SISO U) is a training program aimed at M&S practitioners, offering courses at several levels to SISO members. The program spans multiple SIWs (not every tutorial is offered at every workshop) to help SISO members develop a persistent body of knowledge. The courses are split into three levels: 100 - Overview, 200 - Deep Dive, and 300 - Hands-On Training. Courses are divided into 1.5-hour blocks and are offered on the Monday of each SIW; some may also be offered during the evening on other days. 

Those interested in obtaining Continuing Education credit for their participation may order a certificate when registering for a workshop.  One certificate for 2.0 Continuing Education Units (CEUs) will be granted per SIW.
Upcoming Workshop
 
Florida Mall Conference Center   
Orlando, Florida
8-12 September 2014
 
Schedule and updates
Monday
  0800-1000   STDs 101    DIS 101        HLA 101
  1030-1200   DSEEP       DIS 201        HLA 202
  1900-2100   M&S Resp    M&S Complex

Course catalog: Overview courses

DIS 101 - DISTRIBUTED INTERACTIVE SIMULATION

Provides an overview of the 2010 version of the IEEE 1278.1 DIS standard, including introductory background on DIS in general. Main take-away points: DIS is a viable distributed simulation standard with an active developers group. The standard has doubled in size since the 1995/1998 versions, mainly through better explanations of its use. New features have been added for Directed Energy weapons, Information Operations, and general extensibility of the Protocol Data Units.

Prerequisite: Minimal technical background is required for this tutorial. Familiarity with distributed real-time simulation of vehicles and weapon system platforms would be helpful. We will start with a basic overview for those new to DIS.

VV&A 101 - VERIFICATION, VALIDATION, & ACCREDITATION

The processes of Verification, Validation, and Accreditation are foundational elements that underlie assessments of M&S credibility. Information derived from the VV&A processes is used to shape the understanding of how and where an M&S should be used and under what constraints.

While VV&A is founded on basic software engineering principles, implementation is often constrained by resources, whether these resources be time, money, personnel, or information. This tutorial will introduce M&S Users, M&S Developers, and VV&A Practitioners to the key concepts associated with VV&A planning and implementation, the impacts and the drivers, and basic documentation requirements.

Prerequisite: A general understanding of modeling and simulation.

HIGH LEVEL ARCHITECTURE 101 – AN INTRODUCTION TO HLA

The High-Level Architecture (HLA) is the leading international standard for simulation interoperability. It originated in the defense communities but is increasingly used in other domains. This tutorial gives an introduction to the HLA standard. It describes the requirements for interoperability, flexibility, composability, and reuse, and how HLA meets them. The principles and terminology of an HLA federation are presented, including some real-world examples. The following topics are then covered:

  • The HLA Object Model Template that is used for describing the data exchange between simulations. 
  • The HLA Interface Specification that describes the services that simulations can use for data exchange, synchronization and overall management. 
  • The HLA Rules that federates and federations must follow.

Finally, some practical information is given about where current implementations stand today, including COTS, GOTS and Open Source implementations. The continuous development of performance, robustness of the implementations as well as available tools is also described. Some advice is given on how to get started with HLA, including the use of the related process standard: DSEEP.
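As a conceptual illustration of the publish/subscribe data exchange that the Object Model Template and Interface Specification enable, here is a toy broker in Python. This is illustrative only; the class and method names are invented and do not represent the actual IEEE 1516 RTI API:

```python
class ToyRTI:
    """Minimal publish/subscribe broker illustrating the HLA idea of an
    RTI mediating data exchange declared in the object model.
    (Illustrative only -- not the IEEE 1516 RTI API.)
    """

    def __init__(self):
        self.subscribers = {}  # object class name -> list of callbacks

    def subscribe(self, object_class, callback):
        # A federate declares interest in an object class from the FOM.
        self.subscribers.setdefault(object_class, []).append(callback)

    def update_attributes(self, object_class, attributes):
        # Deliver an attribute update to every federate that subscribed
        # to this object class.
        for callback in self.subscribers.get(object_class, []):
            callback(attributes)


rti = ToyRTI()
received = []
rti.subscribe("Vehicle", received.append)
rti.update_attributes("Vehicle", {"position": (1.0, 2.0, 3.0)})
```

In a real federation the RTI also handles time management, ownership transfer, synchronization points, and much more; the tutorial covers those services in context.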

Prerequisite: A general understanding of modeling and simulation.

DSEEP 101 - DISTRIBUTED SIMULATION ENGINEERING AND EXECUTION PROCESS

The Distributed Simulation Engineering and Execution Process (DSEEP, IEEE 1730) defines the processes and procedures that should be followed by users of distributed simulations to develop and execute their simulations. The DSEEP generalizes the Federation Development and Execution Process (FEDEP, IEEE 1516.3) to all distributed simulation environments and architectures, no longer focusing solely on the High Level Architecture (HLA). This tutorial provides the top level steps and supporting activities for the entire process. It also introduces and illustrates the inputs, recommended tasks, and outcomes of the activities. There will be a brief overview of the architecture-specific annexes for HLA, DIS, and TENA. Attendees will be introduced to the DSEEP Multi-Architecture Overlay (DMAO) Product Development Group (PDG) activity that is further extending the DSEEP to multi-architecture environments.

Prerequisite: A general understanding of modeling and simulation.

GM-VV 101 – An introduction to the GENERIC METHODOLOGY FOR VERIFICATION AND VALIDATION (GM-VV)

The Generic Methodology for Verification and Validation (GM-VV) is an emerging SISO standard for V&V of M&S assets, and is at the same time under consideration by NATO and individual national defence directorates for incorporation into their M&S policies. The GM-VV can be tailored to the V&V needs of any specific M&S application, technology, and organization. The GM-VV provides a comprehensive, goal-oriented reasoning-network approach for efficiently developing evidence-based arguments that justify why M&S assets are, or are not, acceptable for a specific intended use. This network supports M&S stakeholders in their risk-based decision-making on M&S asset development, employment, and reuse.

The tutorial will provide attendees with initial hands-on experience to support the implementation and execution of V&V within their M&S organizations or projects using GM-VV. The tutorial is organized in four parts:

  • Part 1 provides attendees with a general introduction to V&V within the M&S domain. It shows the relationship with V&V practices in systems and software engineering, and who can benefit from using GM-VV, where, and how.
  • Part 2 introduces attendees to the three technical frameworks of the GM-VV: 
    • Conceptual framework: provides a set of generic principles and concepts for V&V of M&S assets. 
    • Implementation framework: provides a set of generic V&V product, process, and organizational building blocks. 
    • Tailoring framework: provides a set of approaches for developing tailored V&V solutions for M&S assets using the implementation framework building blocks. 
  • Part 3 presents a complete illustration of the GM-VV by means of a real-life case-study example from the training domain. 
  • Part 4 provides attendees with some basic application guidance for GM-VV in the form of recommended practices, do’s and don’ts, tools, and useful reference sources.

SOA/LVC 101 - EMPLOYING SERVICE ORIENTED ARCHITECTURE FOR LIVE-VIRTUAL CONSTRUCTIVE MULTI-ARCHITECTURE DISTRIBUTED SIMULATIONS

Building Live-Virtual-Constructive (LVC) multi-architecture distributed simulations on a Service-Oriented Architecture (SOA) has been demonstrated and studied, but has not been generally embraced by the modeling and simulation (M&S) community or by LVC multi-architecture developers. This is due to the real and perceived up-front costs of employing a new technology to address compatibility issues that have traditionally been addressed with ad hoc gateways and bridges, one-of-a-kind database connectors, and other single-point design solutions.

While SOA will not directly address composability of multiple simulations, nor eliminate the need for gateways and bridges, it can be a critical component in integration and management. It also has the potential to provide rapid deployment of integrated M&S components.

The intent of this tutorial is to present a balanced view of the considerations for using SOA as an M&S architecture. The tutorial provides an overview of SOA concepts, the challenges of integrating LVC multi-architecture distributed simulations, an explanation of the benefits and barriers to developing/integrating multi-architecture distributed simulations into a SOA construct, and when and when not to attempt to use SOA as a long-range infrastructure for M&S integration.

  • Introduction to SOA Concepts 
  • Overview of LVC Multi-Architecture Distributed Simulations 
  • Execution of LVC Distributed Simulation in a SOA Construct Architecture 
  • Design Perspectives of SOA for M&S 
  • The Issues & Challenges, Benefits & Barriers 
  • Overview of the Current State of DoD SOA Services 
  • Recent Examples of Successes and Problems in using SOA for M&S 
  • When and When Not to Attempt to Use a SOA-Based M&S Architecture

Prerequisite: A general understanding of modeling and simulation.

SEDRIS 101 - An Introduction to SEDRIS Fundamentals

Environmental data is an increasingly integral part of many of today's information technology applications. The methods and techniques for generating, representing, and sharing environmental data play a key role in the interoperation of heterogeneous systems that use such data. SEDRIS is a suite of technologies, standards, implementations, and tools that provide an integrated approach to the representation and interchange of environmental data.

This tutorial highlights the role and importance of standards in the representation, interchange, and reuse of environmental data, and gives an overview of the fundamental concepts and components of SEDRIS. The presentation will touch on how the SEDRIS technology components are used in various applications and in the interchange of environmental data, and will provide an overview of the SEDRIS ISO/IEC standards and their corresponding on-line registries. A brief overview of several key SEDRIS-based tools and utilities will also be included.

Prerequisites: Familiarity with environmental data and concepts, and a fundamental understanding of how models and simulations use and process environmental data.

OPEN UTF 101 - INTRODUCTION TO THE OPEN UNIFIED TECHNICAL FRAMEWORK

The Open Unified Technical Framework (OpenUTF) comprises three synergistic architectures designed to support parallel and distributed computing within a standards-based interoperability framework. The OpenUTF provides a common infrastructure for hosting plug-and-play model and/or service components that are distributed across processors and are able to mutually interact in abstract time (i.e., scaled or logical time). Because the same framework can be used for operational services and models, the OpenUTF has the potential to unify M&S, service-oriented applications, and T&E.

These architectures are:

  1. Open Modeling and Simulation Architecture (OpenMSA) is a layered architecture, where each layer represents a critical technology for supporting interoperability standards and modern computing on networks of multicore computers. Each of these layers is being investigated and prepared for eventual standardization by the SISO Parallel and Distributed Modeling & Simulation Standing Study Group (PDMS-SSG).
  2. Open System Architecture for Modeling and Simulation (OSAMS) is a subset of the OpenMSA. It focuses on modeling constructs and plug-and-play software composability. The goal of OSAMS is to provide standard services that minimize software development efforts while promoting interoperability and reuse of model/service components. Models developed according to the eventual OSAMS standard would be interoperable within any OSAMS-compliant simulation engine.
  3. Open Cognitive Architecture Framework (OpenCAF) extends OSAMS with special modeling constructs for representing intelligent behavior. This includes a reasoning engine that is able to support rule-based, emotion-based, and training-based thought processes that are triggered by external stimuli, along with goal-oriented task management. OpenCAF is necessary to model behaviors of intelligent entities.

This tutorial will provide an introduction to the architectures of the OpenUTF and will especially help members of SISO participate in the PDMS-SSG.

Prerequisite: A general understanding of modeling and simulation.

PDES 101 - TECHNOLOGY OF PARALLEL DISCRETE EVENT SIMULATION

The multicore computing revolution has begun and will change how software is designed, developed, tested, validated, fielded, and maintained. Supporting M&S in parallel and distributed multicore computing environments offers extreme challenges, especially when modeled entities freely interact with each other at any time and/or time scale. This tutorial introduces the core techniques for parallel discrete event simulation (PDES) that have been researched and developed over the past 25 years. These techniques include:

  1. Lock-time stepping
  2. Fixed time windows
  3. Topology-based synchronization
  4. Rollback-based optimistic approaches
  5. The event horizon and risk free optimistic approaches
  6. Hybrid approaches that introduce flow control
  7. Robust repeatability using abstract time representations
  8. Five-dimensional simulation
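Several of the techniques above revolve around grouping events that cannot causally affect one another. A minimal Python sketch of the fixed-time-window idea follows; it is illustrative only, and assumes a pre-scheduled event list and a known lookahead (the minimum delay between a cause and its earliest possible effect):

```python
import heapq


def run_fixed_windows(events, lookahead):
    """Partition a pre-scheduled event list into fixed time windows.

    Events inside one window [t, t + lookahead) cannot affect each
    other, because any effect of an event is at least `lookahead`
    later.  In a parallel implementation the events of one window
    could therefore be processed concurrently.  (Sketch only: real
    PDES engines also handle events generated during execution.)
    """
    heap = list(events)  # (time, name) tuples
    heapq.heapify(heap)
    windows = []
    while heap:
        start = heap[0][0]           # earliest pending event opens the window
        batch = []
        while heap and heap[0][0] < start + lookahead:
            batch.append(heapq.heappop(heap))
        windows.append(batch)
    return windows


windows = run_fixed_windows([(0.0, "a"), (0.5, "b"), (1.2, "c"), (2.6, "d")],
                            lookahead=1.0)
```

With a lookahead of 1.0, events "a" and "b" fall into one window and can be processed in parallel, while "c" and "d" each get their own window.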

By the end of this tutorial, participants will understand the challenges and general techniques that are used to support PDES. This tutorial will especially help members of SISO understand the core PDES technologies of the OpenUTF, which will enhance participation in the Parallel and Distributed Modeling & Simulation Standing Study Group (PDMS-SSG).

Prerequisite: A general understanding of modeling and simulation.

SISO STANDARDS 101 - AN INTRODUCTION TO THE SISO STANDARDS DEVELOPMENT PROCESS

Hosted by leaders of the SISO Standards Activity Committee.  Explains how to become involved in the SISO standards development and support process.  No tutorial fee is charged for this session.

Prerequisite: A general understanding of modeling and simulation.

SPACE Smackdown 101 

At the Spring 2011 Simulation Interoperability Workshop (SIW), the Space Forum sponsored a "Space Smackdown" event which sought to increase awareness of HLA in the academic simulation community.  It was an outreach of sorts, an effort to expand the HLA community beyond its current base.

The event involved teams from several universities. Each team built one or more space vehicle federates which joined a simulated Earth-Moon federation governed by a single HLA-Evolved object model (FOM). Although the scope of the simulation was modest, its objectives of introducing a new community to HLA were quite successfully met.

This tutorial will discuss the Space Smackdown with a particular focus on the upcoming smackdown event at the Spring 2012 SIW.  We will cover the following topics:

  • An introduction to the motivations behind the Space Smackdown,
  • A review of the scenario that was simulated in the 2011 smackdown, including the HLA-Evolved FOM modules that the federates adhered to,
  • A summary of the highlights of the 2011 event, including the technology challenges, overall results and lessons learned, and
  • A discussion of current plans for the 2012 event, including a brainstorming discussion of possible mission scenarios.

The tutorial is intended for people curious about or interested in participating in the upcoming 2012 smackdown. We will cover the prerequisites for teams wishing to participate, and we will let people know how to get involved.

Prerequisite: A general understanding of modeling and simulation.

The Certified M&S Professional Program (www.simprofessional.org)

The recognition of M&S as a profession and its practitioners as professionals requires several things. Key among them is the Certified Modeling and Simulation Professional Development Program (CMSP), which is administered by the Modeling and Simulation Professional Certificate Council. This program, co-sponsored by SISO, SCS, and NTSA, has been in existence since 2008, but has just undergone a refresh to ensure test content is current, the processes are clear, and the program is credible. The purpose of this course is to describe the program, its role in the M&S profession, its benefits to certificate holders and to the community, and to provide a test preparation class. The class provides a top level overview of the broad range of subject matter covered by the examination, and includes discussion of sample questions drawn from the actual question bank. The ongoing validation and improvement process will be emphasized.

At the completion of the class the student will be able to describe the CMSP program including the requirements, application process, testing process, certificate renewal process, and the two types of certificate (Technical / Developer and Manager / User). A certificate for 1 CEU (Continuing Education Unit) will be awarded.

Prerequisite: The desire and ability to be recognized as a professional M&S practitioner

TENA and JMETC

The Test and Training Enabling Architecture (TENA) provides an advanced set of interoperability software and interfaces for use in joint distributed testing and training. The TENA software includes the TENA Middleware, a high-performance, real-time, low-latency communication infrastructure used by training range instrumentation software and tools during execution of a range training event. The standard TENA Object Model provides data definitions for common range entities and thus enables semantic interoperability among training range applications. The TENA tools, utilities, and gateways assist in creating and managing an integration of range resources. The current version of the TENA Middleware, Release 6.0.2, is being used by the range community for testing, training, evaluation, and feedback and will be used in major exercises in the future.

The Joint Mission Environment Test Capability (JMETC) program is chartered to create a persistent test and evaluation capability throughout the US DOD. JMETC consists of a persistent network; a set of TENA-compliant software middleware, interfaces, tools, and databases; and a process for creating large distributed test events. The combination of TENA and JMETC gives testers and trainers unprecedented power to craft a joint distributed mission environment that meets testing and training requirements for the warfighter.

Prerequisite: A general understanding of modeling and simulation and an interest in testing and training.

MURM 101 - M&S Use Risk Methodology

The use of models, simulations, and their associated data (hereafter referred to as M&S) continues to increase, as does the role that M&S plays in developing scientific and technical knowledge; analyzing problems; designing, developing, and assessing systems; and supporting system operations. Hence, it becomes increasingly important to know how much confidence should be placed in M&S results, and what their limits of credibility may be. At present, M&S may be developed and used without a comprehensive appreciation for the uncertainties associated with the M&S and the M&S results. This tutorial presents a mathematically coherent methodology for assessing M&S Use Risk that provides an explicit relationship between V&V activities and the risk associated with using M&S results. The methodology is flexible in that it can be employed throughout the M&S lifecycle. By employing Claude E. Shannon's maximum information entropy concept, the methodology helps to preclude unintended bias and allows full use of all available information.

Prerequisite: A general understanding of modeling and simulation and an interest in verification and validation.

Gateways 101 - An Introduction to Gateways in Multi-Architecture Environments

In the distributed simulation world, gateways remain a significant interoperability enabler, particularly in multi-architecture applications which are often used to build Live/Virtual/Constructive (LVC) environments.

This tutorial will serve as an introduction to gateways. Primary learning objectives are:

  • To understand the need/role of gateways in distributed simulation (to better enable interoperability).
  • To understand how gateways operate in a distributed simulation environment.
  • To understand the types of distributed simulation architectures typically involved in Live-Virtual-Constructive (LVC) environments.
  • To understand some of the issues surrounding gateways.
  • To understand how gateways are acquired today.
  • To understand the requirements for configuring and using gateways.
  • To be aware of ongoing work to address some of the issues surrounding gateways.

Prerequisite: A general understanding of modeling and simulation.

Course catalog: Deep dive courses

VV&A 201: KEY DRIVERS TO EFFICIENT VV&A IMPLEMENTATION

The objective of this tutorial is to provide those interested in the planning and implementation of VV&A with guidance on how to address key implementation issues and challenges. Topics to be covered include:

  • How requirements traceability enhances the VV&A processes
  • How to derive "acceptable" acceptability criteria
  • How risk-based tailoring can impact VV&A planning and implementation
  • How to manage and document the V&V test process
  • How to use MIL-STD 3022 (Documentation of VV&A for M&S)

Prerequisites: General knowledge about the purpose and principles of VV&A corresponding to VV&A 101. 


Gateways 201 - Technical Issues with Gateway Selection and Configuration

The Gateways 201 tutorial is designed to build upon the introductory Gateways Tutorial by diving deeper into the technical issues associated with gateway selection and configuration along with potential solutions to those issues.  The tutorial will begin with a brief review of the types of problems encountered by gateway users today and then examine ongoing work within the LVC Architecture Roadmap Implementation (LVCAR-I) and SISO related to gateways.

 Primary learning objectives are:

  • To understand the role of gateways in distributed simulation environments
  • To better understand the problems related to employing gateways
  • To better understand potential solutions to these problems
  • To understand current work within SISO (GDACL PDG) to define a set of standardized gateway languages to assist gateway users: 
    • Gateway Description Language 
    • SML – SDEM Mapping Language 
    • GFL – Gateway Filtering Language
  • To understand how gateway performance metrics are being addressed.

 Prerequisite: A general understanding of modeling and simulation.

 

HIGH LEVEL ARCHITECTURE 201 - HLA Evolved - An Overview

This tutorial gives an overview of the new features of HLA Evolved (IEEE 1516-2010), which is a superset of the previous HLA 1516-2000 standard. It describes the new functionality and the new capabilities it provides to federations. It also gives an overview of the open standardization process behind this new version. Some key new features include Modular FOMs, extended XML features, Fault Tolerance, Dynamic Link Compatibility, Encoding Helpers, Web Services and Smart Update Rate Reduction.

Finally, some approaches for migrating existing federations to HLA 1516-2010 are given, including notes on tool support. An extensive list of in-depth reading is also provided.

Note that a short overview of FOM Modules is included but participants may choose to attend HLA 202 for a detailed walkthrough.

Prerequisites: General knowledge about the purpose and principles of HLA and HLA 101.

HIGH LEVEL ARCHITECTURE 202 - HLA Evolved FOM Modules

One of the new features of HLA that has attracted a lot of interest is FOM Modules. These facilitate modular specification and reuse of particular aspects of an HLA federation. One example would be to put vehicles, weather, sensor and federation management aspects in different modules. FOM modules can then be maintained and reused independently, within and between federations and organizations.

This tutorial first provides a recap of FOMs and some best-practices. It then describes the principles of FOM Modules, how they are used in a federation and how they are combined. Best practices of designing FOM modules are given. It also describes some examples ranging from introductory examples to the FOM Modules of the NATO Training Network (as developed by the NATO MSG-068 group) and sample Space FOM Modules from NASA. Finally, some practical advice on developing FOM modules is given and some tools are described and demonstrated.
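The essence of module combination can be illustrated with a small Python sketch: each module contributes object classes and attributes, and they are merged into one federation object model. This is a toy analogy only; real FOM modules are XML documents merged according to the IEEE 1516-2010 merging rules, which also cover interactions, datatypes, and conflict handling:

```python
def combine_fom_modules(*modules):
    """Toy illustration of combining FOM modules.

    Each module maps object class names to sets of attribute names;
    classes that appear in several modules have their attributes
    merged.  (Illustrative only -- not the normative XML-based
    merging process defined in IEEE 1516-2010.)
    """
    combined = {}
    for module in modules:
        for obj_class, attrs in module.items():
            combined.setdefault(obj_class, set()).update(attrs)
    return combined


# A base "vehicles" module and an extension module (names are invented):
vehicles = {"Vehicle": {"position", "velocity"}}
extension = {"Vehicle": {"damageState"}, "WeatherCell": {"temperature"}}
fom = combine_fom_modules(vehicles, extension)
```

The point of the pattern is that the vehicle and weather modules can be maintained and reused independently, while every federate sees the combined object model at runtime.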

Prerequisites: General knowledge about the purpose and principles of HLA corresponding to HLA 101 (but not necessarily 201). Some experience of HLA object model development is useful but not required.

SEDRIS 201 - Using SEDRIS Software and Tools

A fundamental objective in SEDRIS is the representation of complex environmental data and the seamless interchange of environmental data sets. This tutorial is geared toward software developers and environmental data modelers seeking an overview of the SEDRIS software development kits (SDK) and associated tools for accessing, inspecting, and manipulating environmental data.

The tutorial will show how the SEDRIS SDK is used to create applications and libraries that can read and write SEDRIS transmittals. The presentation will demonstrate the process of obtaining and setting up the appropriate SEDRIS component SDKs, depending on the needs of the application.

The tutorial will also show how the SEDRIS tools are used to convert and integrate databases to/from such data formats as Shapefile, GeoTIFF, CTDB, and others. The presentation will cover aspects of verifying the SEDRIS transmittals for conformance to the syntax and rules of the SEDRIS data representation model (DRM), and steps for creating and integrating databases using the Focus tool.

Prerequisites: General knowledge of SEDRIS concepts and components, familiarity with software development and use in environmental data generation and consumption.

DIS 201 - New Extensibility and Dead Reckoning Features in DIS Version 7

DIS Version 7, the new version of IEEE 1278.1, contains many new features and improvements over the 1995 and 1998 DIS standards. The DIS 201 tutorial provides an in-depth review of two of these features: PDU extensibility and improvements in dead reckoning.

PDU extensibility expands the ability of DIS users to add custom data to PDUs. Some PDUs allow user-defined records to be directly added. Other PDUs can be extended using the new Attribute PDU. Both methods retain compatibility with older versions of DIS. This allows customized PDUs to be added in new or upgraded simulations while maintaining interoperability with older simulations that cannot be modified.
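As a rough illustration of the record-based extensibility pattern, the following Python sketch packs a user-defined variable datum record: a 32-bit datum ID, a 32-bit length in bits, and a payload padded to a 64-bit boundary. The layout shown is a simplification; consult IEEE 1278.1 for the normative record formats and datum ID enumerations:

```python
import struct


def pack_variable_datum(datum_id, payload):
    """Pack a DIS-style variable datum record (simplified sketch).

    Layout assumed here: 32-bit datum ID, 32-bit payload length in
    bits, then the payload padded with zero bytes to a 64-bit
    boundary.  All fields big-endian, matching DIS network byte
    order.  See IEEE 1278.1 for the normative definition.
    """
    length_bits = len(payload) * 8
    padding = (-len(payload)) % 8  # pad payload up to an 8-byte multiple
    return (struct.pack(">II", datum_id, length_bits)
            + payload
            + b"\x00" * padding)


record = pack_variable_datum(0x1234, b"hello")  # datum ID is invented
```

Because receivers that do not recognize the datum ID can skip the record using the length field, older simulations remain interoperable with extended ones.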

Dead reckoning has been enhanced in DIS Version 7, mainly in the extrapolation of entity orientation. A new geometric method of determining the orientation threshold is described using either quaternions or rotation matrices. This method avoids the problems of Euler angle singularities that can cause excessively high PDU transmit rates. Other new features speed up dead reckoning calculations in receiving simulations by adding extra information in the Entity State PDU. These new features maintain full backward and forward compatibility with DIS Versions 5 and 6.
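The geometric orientation threshold can be sketched in a few lines of Python: the angular difference between two unit quaternions (the actual orientation and the dead-reckoned estimate) is computed directly, with no Euler angles involved. Function names are illustrative, not taken from the standard:

```python
import math


def orientation_error_rad(q_actual, q_estimated):
    """Geodesic angle between two unit quaternions, given as
    (w, x, y, z) tuples.  The absolute value of the dot product
    handles the double cover: q and -q represent the same rotation.
    """
    dot = abs(sum(a * b for a, b in zip(q_actual, q_estimated)))
    return 2.0 * math.acos(min(1.0, dot))


def needs_update(q_actual, q_estimated, threshold_rad):
    """Send a new Entity State PDU only when the dead-reckoned
    orientation has drifted past the angular threshold."""
    return orientation_error_rad(q_actual, q_estimated) > threshold_rad


# Example: identity orientation vs. a 10-degree rotation about the z-axis.
q_identity = (1.0, 0.0, 0.0, 0.0)
half = math.radians(5.0)
q_rotated = (math.cos(half), 0.0, 0.0, math.sin(half))
```

With a 3-degree threshold, the 10-degree drift above would trigger an update; with a 15-degree threshold it would not. Because the comparison is a single angle, it behaves identically near the poles where Euler angles degenerate.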

Prerequisite: Familiarity with basic DIS PDU usage. Knowledge of dead reckoning is helpful but the tutorial will include an introduction to the concepts.

COURSE CATALOG: HANDS-ON COURSES

OpenUTF - Hands-on Training for the Open Unified Technical Framework

This hands-on course includes seven 1.5-hour sessions over three days. Participants are strongly encouraged to participate in all sessions. See requirements below regarding license agreements and participation of non-US citizens.

The OpenUTF is an emerging framework for hosting next-generation composable, scalable, parallel and distributed M&S systems. It comprises three synergistic architectures that are being investigated, refined, and prepared for future standardization by the PDMS-SSG.

  1. Open Modeling and Simulation Architecture (OpenMSA) is a layered architecture, where each layer represents a critical technology for supporting interoperability standards and modern parallel and distributed computing on networks of multicore computers.
  2. Open System Architecture for Modeling and Simulation (OSAMS) is a subset of the OpenMSA. It focuses on modeling constructs that are designed to support plug-and-play software composability. OSAMS provides a programming framework that minimizes software development efforts while promoting interoperability and reuse of plug-and-play model/service components.
  3. Open Cognitive Architecture Framework (OpenCAF) extends OSAMS with modeling constructs for representing intelligent behavior. This includes a reasoning engine that is able to support rule-based, emotion-based, and training-based thought processes, along with goal-oriented task management.

The WarpIV Kernel provides the open-source reference implementation of the OpenUTF core infrastructure and is made freely available to all qualifying United States and Canadian organizations for non-commercial use. Non-U.S. citizens are permitted to participate in this training event, but would require an export license to receive a copy of the software. This training event will utilize the WarpIV Kernel in its hands-on assignments, each designed to guide participants through the primary modeling constructs of the OpenUTF. All training materials, including a quick reference guide and a set of assignment worksheets, will be provided to participants. By the end of this training event, participants will be familiar with the OpenUTF modeling constructs and able to develop parallel and distributed simulations on their own.

Prerequisites: No parallel or distributed computing experience is required for this training event. However, participants should be somewhat familiar with C++ and/or basic programming concepts. The class will be broken up into small teams of 3-4 participants for the hands-on assignments, with at least one strong lead programmer per team. The instructor will guide the teams through each hands-on exercise. The instructor will provide laptops for use by participants as available. U.S. and Canadian participants wishing to use their own laptops must apply for and receive a software license prior to the training.