4.6. Adapters

There are two kinds of Adapters in MQC.

  • File Readers

    Data and Findings are extracted from report files.

  • API Connectors

    Report files are downloaded to be read by File Readers, and/or Data and Findings are fetched directly from an API.

Adapters can be enabled or disabled in the adapters dialog.

Only enabled API Connectors are available for selection in the API-Tab of the data location dialog. Disabling unnecessary file readers can speed up importing large numbers of files.

The allowed file extensions for the data locations depend on the enabled file readers.

../../_images/MQC_ConfigAdapters_Manage.png

Figure 4.76 Enabling or disabling adapters in the adapter dialog

Not all available adapters are shown in the dialog. To add a special adapter, click Add and then Special.

../../_images/MQC_ConfigAdapters_AddSpecialAdapter.png

Figure 4.77 Adding a special adapter

In addition to managing the adapters, the adapters dialog provides a button to view the execution order of the file readers.

../../_images/MQC_ConfigAdapters_ExecutionOrder.png

Figure 4.78 Execution order of all enabled file readers. The file readers are executed in this order: MQC first checks whether the file extension(s) match the file to be imported and then lets each file reader validate whether it should handle the provided file.

With the button Test importing a Report File, a single file can be tested for import. If any errors occur while selecting or executing the matching file reader, the error messages are shown to the user; otherwise the imported data is shown in a result table.
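
The selection logic can be pictured as a simple two-stage filter. The following Python sketch is purely illustrative; the class and function names are hypothetical and do not reflect MQC's actual implementation:

import os

class FileReader:
    """Illustrative base class; 'extensions', 'can_handle' and 'read' are hypothetical names."""
    extensions = ()               # e.g. (".xml", ".mxmr")

    def can_handle(self, path):   # cheap self-validation, e.g. peek at the root element
        raise NotImplementedError

    def read(self, path):         # actual data extraction
        raise NotImplementedError

def import_report(path, enabled_readers):
    ext = os.path.splitext(path)[1].lower()
    for reader in enabled_readers:              # readers in their configured execution order
        if ext not in reader.extensions:        # stage 1: file extension must match
            continue
        if reader.can_handle(path):             # stage 2: reader validates the file itself
            return reader.read(path)
    raise ValueError(f"No enabled file reader accepts {path!r}")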

4.6.1. File Readers

The tools supported by MQC with file readers are described in the subsections below.

CSV files can be imported with the Generic Data Sheet File Reader by using Adapter Options.

If you want to import manually collected data, use the provided Excel Template and let the Manual data import (Excel Template) file reader import it into MQC.

4.6.1.1. MES Model Examiner® (MXAM)

MQC supports two types of MXAM report formats:

4.6.1.1.1. MXMR

The following example of an MXMR report shows which information is imported by MQC’s MXAM file reader:

  • from the <RReport> element, specifically the date element: ReportDateTime

  • from the first <subComponents> element inside an <artifacts> element, the attribute name as ArtifactName and the attribute path as ArtifactPath

MXAM provides guideline and finding data for each artifact in the MXMR Report. Therefore, each artifact section will be parsed to get the information of all findings and guideline states.

<artifacts result="Aborted" adapterId="com.modelengineers.mxam.tooladapter.matlab" storageNature="Tool artifact">
  <structure>
    <subComponents name="ExPol" path="ExPol">
    </subComponents>
  </structure><summary itemType="Findings">
    <statistic resultType="Review" count="3"/>
    <statistic resultType="Failed" count="66"/>
    <statistic resultType="Info" count="28"/>
    <statistic resultType="Passed" count="12"/>
    <statistic resultType="Ignored" count="2"/>
  </summary><summaries itemType="Guidelines">
    <statistic resultType="Review" count="1"/>
    <statistic resultType="Failed" count="5"/>
    <statistic resultType="Passed with Infos" count="24"/>
    <statistic resultType="Passed" count="11"/>
  </summaries>
</artifacts>

The file reader reads the measures found in the Findings and Guidelines headers: <summary itemType="Findings"> and <summaries itemType="Guidelines">. Please note that an MXAM report can contain multiple artifacts; for each artifact MQC reads the Findings and Guidelines summaries, which are saved as Finding\Result and Guideline\Result, respectively:

  • Review

  • Failed

  • Info (for Finding\Result) and Passed with Infos (for Guideline\Result)

  • Passed

  • Ignored

  • Aborted

  • Canceled

  • Repaired

  • Unrepaired

  • Warning (for Finding\Result) and Warnings (for Guideline\Result)

The file reader also reads the Model Architecture chapter data separately, with ModelArchitecture as MeasurementName. For this, the file reader extracts all findings related to this chapter and aggregates the finding states from the result attribute for each artifact to define the Finding\Result measure. To get the Guideline\Result measure, the file reader determines the worst finding for each artifact and guideline (e.g. MXRAY_COMPLEXITY_LOCAL) and again aggregates equal results.

<findings xsi:type="MatlabReport:RMatlabFinding" result="Passed" path="ExPol" name="ExPol" qualifier="Model"
          checkTreePath="mes_guidelines_embedded_coder_fs/Model Architecture/mes_arch_1301/matlab_mxray_1301"
          check="//@project/@documents.0/@chapters.1/@guidelines.0/@checks.0" artifact="//@artifacts.0"
          artifactStructureComponent="//@artifacts.0/@structure/@subComponents.0" ignoreComment=""
          parentPath="ExPol" repairInfo="">
    <properties key="Mask type" value="" visible="false"/>
    <properties key="Block type" value="" visible="false"/>
    <properties key="MessageParameter" value="[local complexity, 22]" visible="false"/>
    <properties key="metric" value="MXRAY_COMPLEXITY_LOCAL" visible="false"/>
    <message messageId="BoundCheck_GOOD" messageText="The local complexity is 22.">
        <messageParameter>local complexity</messageParameter>
        <messageParameter>22</messageParameter>
    </message>
    <linkAction>
        <properties key="label" value="Open Model" visible="true"/>
        <properties key="link" value="matlab:open_system('EV3Control_demo_ec');" visible="true"/>
    </linkAction>
    <elementIdentifier xsi:type="XMatlab:XMatlabElementIdentifier"
                       elementIdentifier="ExPol,&#xA;ExPol,&#xA;Model,&#xA;BoundCheck_GOOD:
                                          The local complexity is 22."
     path="ExPol" name="ExPol" qualifier="Model" artifactName="ExPol" version="5.1.0.xMessage" sid="">
        <message messageId="BoundCheck_GOOD" messageText="The local complexity is 22.">
            <messageParameter>local complexity</messageParameter>
            <messageParameter>22</messageParameter>
        </message>
    </elementIdentifier>
</findings>

MQC reads and shows the Model Architecture results separately; they are not contained in the overall Guideline\Result and Finding\Result measure values.

MQC extracts all findings, generates a separate Data Detail for each finding and reads:

  • Check as Subject

  • the value of the check attribute, which is used to fetch the path of the check inside the guideline document tree the finding belongs to, as Subject Path

  • the path attribute of the findings element as Artifact Inner Path

  • Result as Measure

  • the result attribute of the findings element as State

  • the messageText or text attribute of the message element as Issue Message

4.6.1.1.2. Excel

MQC accepts an Excel file as a valid MXAM report file if it contains at least the Project Overview sheet. All guidelines and findings are then read from the Findings sheet, taking into account that multiple Findings sheets may be present (e.g. Findings 2, Findings 3, etc.).

MQC reads

  • from the Project Overview sheet:

    • the ReportDateTime as stored in the row that contains the string “Generated at:”

  • from the Findings sheets EACH row as an MXAM finding, where:

    • the Check ID column is used to extract the guideline name

    • the Objectives column is used to extract the MeasurementName

    • the Artifact column is read as ArtifactName

    • the Result column is read as VariableName.

To extract the guideline name from the entry of the Check ID column, MQC splits the given path into its parts, where the last part is the check and the second-to-last part indicates the guideline. If, for example, the column contains the string

mes_first_set_modeling_guidelines_fs/Layout and Design/Modeling of Data Flow/misra_slsf_030_abc/mcheck_misra_slsf_030_ab

  • mes_first_set_modeling_guidelines_fs is the name of the document

  • Layout and Design is a chapter inside the document

  • Modeling of Data Flow is a subsection of the above chapter

  • misra_slsf_030_abc is the guideline and

  • mcheck_misra_slsf_030_ab is the check.

If the Objectives column does not exist, MQC instead checks for the column name Check Type to extract the measurement name. If multiple entries are contained, MQC uses the first entry as the measurement name.

If the MXAM Excel report contains findings for different subsystems for the same artifact, MQC tries to extract a common prefix of all artifact paths as ArtifactName.

After reading all rows, the extracted findings (each row equals one MXAM finding) are aggregated and stored as MXAM measures:

  • the number of findings with the same measurement name, the same artifact name and the same variable name is stored as MeasurementName.FindingCount.VariableName = RowCount, e.g. Functionality.FindingCount.Passed = 212.

To extract the GuidelineCount measure:

  • MQC first groups all findings according to the same measurement name, the same artifact name and the same guideline name

  • then for each group MQC takes the variable name of the worst finding as guideline result, e.g. Failed

  • and afterwards counts, for each measurement name and artifact name, the guidelines with the same (worst) result and stores the count as MeasurementName.GuidelineCount.VariableName, e.g. Functionality.GuidelineCount.Failed = 15.
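
As a minimal illustration of the two aggregations, the following Python sketch groups the rows in the same way; the severity ordering used to pick the worst finding is an assumption for this example and is not specified by the report format:

from collections import Counter, defaultdict

# Assumed severity ordering, worst first (illustrative assumption only).
SEVERITY = ["Failed", "Review", "Warning", "Info", "Passed"]

def worst(results):
    return min(results, key=SEVERITY.index)

def aggregate(rows):
    # rows: one dict per Findings sheet row with the keys
    # "measurement", "artifact", "guideline" and "result"
    finding_count = Counter(
        (r["measurement"], r["artifact"], r["result"]) for r in rows)

    per_guideline = defaultdict(list)
    for r in rows:
        per_guideline[(r["measurement"], r["artifact"], r["guideline"])].append(r["result"])

    guideline_count = Counter(
        (m, a, worst(results)) for (m, a, _), results in per_guideline.items())
    return finding_count, guideline_count

Here finding_count[("Functionality", "ExPol", "Passed")] corresponds to Functionality.FindingCount.Passed for an artifact named ExPol, and guideline_count analogously to the GuidelineCount measures.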

4.6.1.2. MES M-XRAY® (MXRAY)

MQC supports the Standard XML MXRAY Report file.

The following information is extracted by MQC’s MXRAY file reader:

  • from the <Timestamp> element

    • ReportDateTime

  • from the <SubsystemQualityOverview> header all included elements; usually these are:

    • Local Complexity

    • Level

    • %Elementary Inputs Unused (globally)

    • Cyclomatic Complexity

    • Inports

    • Outports

    For each of these Measures the variables Good, Acceptable and Bad are imported. Furthermore, for Local Complexity the variables LowerBoundOfAcceptable and LowerBoundOfBad are read.

  • from the GlobalValueSummary element

    • Global Complexity (Ref0)

    • Global Complexity (Ref1)

    • Global Complexity (RefN)

    • Global Complexity Stateflow (Ref0)

    • Global Complexity Stateflow (Ref1)

    • Global Complexity Stateflow (RefN)

  • from the CloneGroups element

    • NumberOfDetectedCloneGroups

    • NumberOfSubsystemsAnalyzed

    • NumberOfUniqueSubsystemsInAllCloneGroups

    • NumberOfSubsystemsInAllCloneGroups

    In MQC CloneGroups.NumberOfUniqueSubsystemsInAllCloneGroups is shown as CloneGroups.Bad:

    Bad = NumberOfUniqueSubsystemsInAllCloneGroups

    Additionally CloneGroups.NumberOfSubsystemsAnalyzed is used together with CloneGroups.NumberOfUniqueSubsystemsInAllCloneGroups to calculate a value for CloneGroups.Good:

    Good = NumberOfSubsystemsAnalyzed - NumberOfUniqueSubsystemsInAllCloneGroups

4.6.1.3. MES Test Manager® (MTest)

The MQC-MTest file reader supports the HTML format for MTest Report files.

MQC expects a test catalog html file, where the file name starts with “Testcatalog_”. In addition, it reads data from the assessment catalog as well as from the requirement catalog if located in the same folder as the test catalog.

Extracting the data from a zip archive is also supported; in that case, the archive name must start with “Testcatalog_” as well.

The following information is extracted by the file reader:

From the <table class="infoHeaderTable"> table MQC extracts the value of Generated at: as ReportDateTime.

<body>
  <h1>MTest Sequence Catalog: MTest Demo Autopilot_ec Batch Project</h1>
  <div class="infoHeader">
    <table class="infoHeaderTable">
      <tr><td>MTest Project: </td><td>MTest Demo Autopilot_ec Batch Project</td></tr>
      <tr><td>Project Author: </td><td>MTest Demo User</td></tr>
      <tr><td>Generated at: </td><td>2023-04-12 21:20</td></tr>
    </table>
  </div>

For each catalog file the artifact names are extracted from the list of test objects. Each artifact name is a concatenation of the test model and test object values, e.g. “autopilot_demo_ec/Autopilot_Mode_Logic”.

<ol>
  <li>Test Model: <a href="...">autopilot_demo_ec</a>
    <ol>
      <li>Test Object: <a href="...">Autopilot_Mode_Logic</a>
        <ol>
          <li>Test Group: <a href="...">Test001 - Altitude mode logic functionality testing using MTCD (text-based)</a></li>
          <li>Test Group: <a href="...">Test002 - Altitude mode logic functionality testing using MTCD (Excel-based)</a></li>
          <li>Test Group: <a href="...">Test003 - Altitude mode logic functionality testing via Variation in MTCD (text-based)</a></li>
        </ol>
      </li>
      <li>Test Object: <a href="...">Glide_Slope_Control</a>
        <ol>
          <li>Test Group: <a href="...">Test001 - Glide slope controller functionality testing using MTCD (text-based)</a></li>
          <li>Test Group: <a href="...">Test002 - Glide slope controller functionality testing using MTCD (Excel-based)</a></li>
          <li>Test Group: <a href="...">Test003 - Glide slope controller functionality testing via variation test case in MTCD (text-based)</a></li>
        </ol>
      </li>
    </ol>
  </li>
</ol>

4.6.1.3.1. Test Catalog

MQC extracts all test case information from the third table of each test group section.

<h3>1.1.1 Test Group: Test001 - Altitude mode logic functionality testing using MTCD (text-based)</h3>
<table width="100%" class="contentCheck">...</table>
<table class="content" width="100%">...</table>
<table width="100%" border="1" cellpadding="5">
  <tr><th>Test Sequence</th><th>Linked Items</th><th>Test Sequence Description</th><th>Evaluation Results</th></tr>
  <tr>
    <td>
      <a id="Mod_autopilot_demo_ec_Autopilot_Mode_Logic_Test001_TSeq001"></a>
      <a href="...">01: TS_AP_ML_001</a></b>:<span class="passed">Passed</span><br />
      <em class="small">autopilot_demo_ec/<br />Autopilot_Mode_Logic/<br />Test001</em><br /><br />
      <b>Work Status:</b>reviewed
    </td>
    <td>
      <b>Requirements</b><br />
      <a class="normal" href="...">AP_ML_req_01</a><br /><br />
      <b>Assessments</b><br />
      <a class="normal" href="...">assmnt_AP_ML_req_01</a><br />
      <a class="normal" href="...">mars_AP_ML_req_01</a><br /><br />
    </td>
    <td>
      <b>Glideslope armed 01</b><br />
      Motivation<br />
      ...
    </td>
    <td>
      <b>Assessment:</b><span class="passed">Passed</span><br />
      <b>Regression:</b><span class="none">None</span><br />
      <b>Back2Back:</b><span class="passed">Passed</span><br />
      <b>evaluated by:</b><br />MTest Demo User (12-Apr-2023 21:05:01)<br />
    </td>
  </tr>
  <tr>...
</table>

MQC iterates over each row of the table and counts how often each result occurs. Afterwards, the measures of all test groups belonging to the same artifact (test object) are summed up (a simplified sketch follows the lists below).

  • For Testcase\Result the content of the first column is checked, e.g. <span class="passed">Passed</span>.

    The following numbers are collected:

    • Testcase\Result.Passed

    • Testcase\Result.Review

    • Testcase\Result.Failed

    • Testcase\Result.Error

    • Testcase\Result.Untriggered

    • Testcase\Result.Inactive

    • Testcase\Result.Unknown

    • Testcase\Result.None

    • Testcase\Result.Total

  • For Testcase\Work Status the content of the first column is checked, e.g. <b>Work Status:</b>reviewed.

    The following numbers are collected:

    • Testcase\Work Status.Undefined

    • Testcase\Work Status.Created

    • Testcase\Work Status.Reqchanged

    • Testcase\Work Status.Described

    • Testcase\Work Status.Inwork

    • Testcase\Work Status.Rejected

    • Testcase\Work Status.Completed

    • Testcase\Work Status.Reviewed

    • Testcase\Work Status.Total

  • For Testcase\Requirement Coverage, MQC checks if the second column contains any requirements for a test case.

    The following numbers are collected:

    • Testcase\Requirement Coverage.Covered (number of testcases with linked requirements)

    • Testcase\Requirement Coverage.Uncovered (number of testcases not linked to any requirement)

    • Testcase\Requirement Coverage.Total (number of all listed testcases)
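
The row counting and the per-artifact aggregation can be sketched as follows in Python; row parsing is reduced to pre-extracted fields, and all names are hypothetical:

from collections import Counter

def count_test_group(rows):
    # rows: one dict per table row, e.g.
    # {"result": "Passed", "work_status": "Reviewed", "requirements": ["AP_ML_req_01"]}
    result, work_status, req_coverage = Counter(), Counter(), Counter()
    for row in rows:
        result[row["result"]] += 1
        work_status[row["work_status"]] += 1
        req_coverage["Covered" if row["requirements"] else "Uncovered"] += 1
    for counter in (result, work_status, req_coverage):
        counter["Total"] = len(rows)
    return result, work_status, req_coverage

def sum_per_artifact(test_groups):
    # test_groups: the (result, work_status, req_coverage) tuples of all test groups
    # belonging to the same test object; the counts are summed element-wise
    totals = (Counter(), Counter(), Counter())
    for group in test_groups:
        for total, counter in zip(totals, group):
            total.update(counter)
    return totals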

MQC extracts all structural coverage information from the first table of each test object section starting with the 8th row (previous rows contain the test case summary for the corresponding test object).

<h2>1.1 Test Object: Autopilot_Mode_Logic</h2>
<table width="100%" class="coverage">
  <tr><td colspan="8" class="tableabove">Test Sequence State</td></tr>
  <tr>...</tr>
  <tr>...</tr>
  <tr>...</tr>
  <tr>...</tr>
  <tr><td colspan="8" class="tableabove">Structural Coverage</td></tr>
  <tr><th>Tool</th><th>Metric</th><th colspan="2">Percentage</th><th>Target</th><th>Total</th><th>Reached</th><th>Unreached</th></tr>
  <tr>
    <td rowspan="3"><a href="...">Model Coverage</a></td>
    <td>Condition</td>
    <td>96.4%</td>
    <td><table class="metricbar">...</table></td>
    <td>80%</td>
    <td>28</td>
    <td>27</td>
    <td>1</td>
  </tr>
  <tr>
    <td>Decision</td>
    <td>81.2%</td>
    <td>...
  </tr>
</table>

The following numbers are collected from Reached, Unreached and Total columns, each with the variables Covered, Uncovered and Total:

  • Model Coverage\Condition

  • Model Coverage\Decision

  • Model Coverage\Statement

  • Model Coverage\Lookup

  • Code Coverage\Condition

  • Code Coverage\Decision

  • Code Coverage\Statement

4.6.1.3.2. Assessment Catalog

MQC extracts all assessment information from the third table of each test object section.

<h2>1.1 Test Object: Autopilot_Mode_Logic</h2>
<table width="100%" class="coverage">...</table>
<table width="100%" class="coverage">...</table>
<table width="100%" border="1" cellpadding="5">
  <tr><th>Assessment</th><th>Linked Items</th><th>Assessment Description</th><th width="60%"> Test Sequence Results</th></tr>
  <tr>
    <td>
      <a id="Mod_autopilot_demo_ec_Autopilot_Mode_Logic_assmnt_AP_ML_req_01"></a><b>
      <a href="...">assmnt_AP_ML_req_01</a> :</b>
      <span class="failed">Failed</span><br />
      <em class="normal">(Autopilot_Mode_Logic)<br /></em><br />
      ...
      <b>Scope:</b>      global<br /><br />
      <b>Work Status:</b>reviewed<br />
    </td>
    <td>
      <b>Requirements</b><br />
      <a href="...">AP_ML_req_01</a><br /><br />
      <b>Test Sequences</b><br />
      <a href="...">01: TS_AP_ML_001</a><br />
      <a href="...">02: TS_AP_ML_002</a><br />
      ...
    </td>
    <td>
      <b>Check state Armed</b><br />Checks if Armed is true when Glide Slope Enable and Glide Slope Signal are both true.
    </td>
    <td>
      <details>
        <summary><b>Test Sequences:</b><span class="failed">Failed</span></summary>
        <div><br />
          <b>Altitude mode logic functionality testing using MTCD (text-based) (Test001)</b><br />
          <span class="failed"></span><a href="...">3: TS_AP_ML_003 - Glideslope coupled 01</a>: NOT ok. Not Armed when Glide Slope Enable and Glide Slope Signal are both true.<br />
          <span class="failed"></span><a href="...">4: TS_AP_ML_004 - Glideslope coupled 02</a>:&nbsp; NOT ok. Not Armed when Glide Slope Enable and Glide Slope Signal are both true.<br />
          ...
        </div>
      </details><br />
      <details>
        <summary><b>Test Sequences:</b><span class="passed">Passed</span></summary>
        <div><br />
          <b>Altitude mode logic functionality testing using MTCD (text-based) (Test001)</b><br />
          <span class="passed"></span><a href="...">1: TS_AP_ML_001 - Glideslope armed 01</a>: Ok.<br />
          <span class="passed"></span><a href="...">2: TS_AP_ML_002 - Glideslope armed 02</a>: Ok.<br />
          ...
        </div>
      </details><br />
    </td>
  </tr>
  <tr>...
</table>

MQC iterates over each row of the table and counts how often each result occurs.

  • For Assessment\Result the content of the first column is checked, e.g. <span class="failed">Failed</span>.

    The following numbers are collected:

    • Assessment\Result.Passed

    • Assessment\Result.Review

    • Assessment\Result.Failed

    • Assessment\Result.Error

    • Assessment\Result.Untriggered

    • Assessment\Result.Inactive

    • Assessment\Result.Unknown

    • Assessment\Result.None

    • Assessment\Result.Total

  • For Assessment\Work Status the content of the first column is checked, e.g. <b>Work Status:</b>reviewed<br />.

    The following numbers are collected:

    • Assessment\Work Status.Undefined

    • Assessment\Work Status.Created

    • Assessment\Work Status.Reqchanged

    • Assessment\Work Status.Described

    • Assessment\Work Status.Inwork

    • Assessment\Work Status.Rejected

    • Assessment\Work Status.Completed

    • Assessment\Work Status.Reviewed

    • Assessment\Work Status.Total

  • For Assessment\Requirement Coverage, MQC checks if the second column contains any requirements for an assessment.

    The following numbers are collected:

    • Assessment\Requirement Coverage.Covered (number of assessments with linked requirements)

    • Assessment\Requirement Coverage.Uncovered (number of assessments not linked to any requirement)

    • Assessment\Requirement Coverage.Total (number of all listed assessments)

4.6.1.3.3. Requirement Catalog

MQC extracts all requirement information from the fourth table of each test object section.

<h2>1.1 Test Object: Autopilot_Mode_Logic</h2>
<table width="100%" class="contentCheck">...</table>
<table width="100%" class="coverage">...</table>
<table width="100%" class="coverage">...</table>
<table width="100%" border="1" cellpadding="5">
  <tr><th>Requirement</th><th>Linked Items</th><th>Requirement Description</th><th>Additional Document Fields</th></tr>
  <tr>
    <td>
      <a id="Mod_autopilot_demo_ec_Autopilot_Mode_Logic_AP_ML_req_01"></a>
      <b>AP_ML_req_01:</b>
      <span class="failed">Failed</span> <br />
      <em class="normal">(Autopilot_Mode_Logic)<br /> </em><br />
      <b>Testability</b><br /><b>Status:</b> Yes<br />
      <b>Review Status:</b> reviewed (derived)<br />
    </td>
    <td>
      <b>Assessments</b><br />
      <a class="normal" href="...">assmnt_AP_ML_req_01</a><br />
      <a class="normal" href="...">mars_AP_ML_req_01</a><br /><br />
      <b>Test Sequences</b><br />
      <a class="normal" href="...">01: TS_AP_ML_001</a><br />
      <a class="normal" href="...">02: TS_AP_ML_002</a><br />
      ...
    </td>
    <td>Glide-slope control is armed when its Glide Slope Enable and Glide Slope Signal are both true.</td>
    <td>
      <b>Requirement Type</b><br />functional requirement<br /><br />
      <b>Requirement Status</b><br />reviewed<br /><br />
      <b>Original Testability</b><br />Test<br /><br />
    </td>
  </tr>
  <tr>...
</table>

MQC iterates over each row of the table and counts how often each result occurs.

  • For Requirement\Result the content of the first column is checked, e.g. <span class="failed">Failed</span>.

    The following numbers are collected:

    • Requirement\Result.Passed

    • Requirement\Result.Review

    • Requirement\Result.Failed

    • Requirement\Result.Error

    • Requirement\Result.Untriggered

    • Requirement\Result.Inactive

    • Requirement\Result.Unknown

    • Requirement\Result.None

    • Requirement\Result.Total

  • For Requirement\Assessment Coverage, MQC checks if the second column contains any assessments for a requirement.

    The following numbers are collected:

    • Requirement\Assessment Coverage.Covered (number of requirements with linked assessments)

    • Requirement\Assessment Coverage.Uncovered (number of requirements not linked to any assessment)

    • Requirement\Assessment Coverage.Total (number of all listed requirements)

  • For Requirement\Testcase Coverage, MQC checks if the second column contains any test sequences for a requirement.

    The following numbers are collected:

    • Requirement\Testcase Coverage.Covered (number of requirements with linked testcases)

    • Requirement\Testcase Coverage.Uncovered (number of requirements not linked to any testcase)

    • Requirement\Testcase Coverage.Total (number of all listed requirements)

  • For Requirement\Testability the content of the first column is checked, e.g. <b>Testability</b><br /><b>Status:</b> Yes<br />.

    The following numbers are collected:

    • Requirement\Testability.Testable (number of requirements with testability status Yes)

    • Requirement\Testability.Untestable (number of requirements with testability status No)

    • Requirement\Testability.Total (number of all listed requirements)

4.6.1.4. PikeTec TPT

MQC supports two types of TPT report formats:

4.6.1.4.1. XML

The following TPT XML report example shows where the TPT XML file reader extracts the expected measures from:

<Header ExecutionConfig="Lights Control MATLAB _SiL" ExecutionDate="14:47:58 10.05.2016"
        TptFileName="D:\requirements.tpt" TptVersion="8u2">
  <Property Name="Author" Value="AuthorName">
  <Property Name="Model Under Test" Value="D:\matlab-platform\lights_control_simulink.mdl"/>
  <Property Name="System Under Test" Value="lights_control_simulink/lights_control"/>
  <Property Name="Platform Config" Value="SIL">
  <Platform History="100" Name="MATLAB-Platform" Stepsize="10000" Timeout="60000000">
    <Property Name="MATLAB Version" Value="MATLAB 8.4"/>
  </Platform>
</Header>

MQC extracts from the <Header.. > element

  • ExecutionDate (stored in MQC as ReportDateTime)

  • SystemUnderTest Value (stored in MQC as ArtifactPath), in this case “lights_control_simulink/lights_control”

The TPT XML file reader only stores the name of the subsystem instead of the complete path.

The Measurement, as an additional structuring element (see Quality Computation), is used here to distinguish between test results (same base measure and variable name) for different contexts, e.g. MiL or SiL.

The measurement name is either expected as Name attribute of the <Platform> element or read from the ExecutionConfig attribute of the <Header> element. In both cases, MQC expects the following syntax: _SIL_ or _MIL_.

Alternatively, MQC allows you to define a file reader option to extract the measurement name from another part of an XML report. This then overrides the built-in behavior.

The Measurement can be read from a specific attribute of an XML element.

Listing 4.19 File reader option to extract a ‘MeasurementName’ from the ‘ExecutionConfig’ attribute of the Header element in the TestSummary element of the xml.
AdapterOptions:
- $type: MES.MQC.DataSourceLibrary.Adapters.Files.TptXmlFileReader+FileReaderOptions
  MeasurementNameFromXml:
    - Path: TestSummary.Header
      Attribute: ExecutionConfig
      Regex: ^.*_(.+)$
      Result: $1

If a report contains multiple elements with the same tag, they can be filtered by matching an attribute value by regex.

Listing 4.20 File reader option to extract a ‘MeasurementName’ from the ‘Value’ attribute of the Property element in the Header element in the TestSummary element of the xml. This extraction uses only the Property element where the Name attribute has the value ‘Platform Config.’
AdapterOptions:
- $type: MES.MQC.DataSourceLibrary.Adapters.Files.TptXmlFileReader+FileReaderOptions
  MeasurementNameFromXml:
    - Path: TestSummary.Header.Property
      Matches:
        - Attribute: Name
          Regex: Platform Config
      Attribute: Value
      Regex: ^([M|S]{1}IL)$
      Result: $1

Additionally, it is possible to extract ArtifactPath, MeasurementName and/or ReportDateTime from the filepath of a report by defining a File Reader Option, see File Reader Options.

Testcases

...
<Summary AssessmentDuration="2.518" ExecutionDuration="2.078">
  <ExecutionSummary Errors="0" Failed="6" Inconclusive="0" Succeeded="5" Tests="11"/>
</Summary>
...
<Testcases Number="26">
  <Group Name="...">
    <Group Name="...">
      <Testcase Description="" FullName="testcase 1" Id="8296" Name="testcase 1" Requirements="SPEC-4 SPEC-15" Status="Stable" Version="2"/>
      <Testcase Description="" FullName="testcase 2" Id="8352" Name="testcase 2" Requirements="SPEC-16 SPEC-4 SPEC-9" Status="Stable" Version="2"/>
      <Testcase Description="" FullName="testcase 3" Id="8541" Name="testcase 3" Requirements="SPEC-13" Status="Stable" Version="2"/>
    </Group>
    <Group Name="...">
      <Group Name="...">
        <Testcase Description="" FullName="testcase 4" Id="8310" Name="testcase 4" Requirements="SPEC-17 SPEC-18 SPEC-9" Status="Stable" Version="2"/>
        <Testcase Description="" FullName="testcase 5" Id="8432" Name="testcase 5" Requirements="SPEC-17 SPEC-18 SPEC-8" Status="Stable" Version="2"/>
        ...
    </Group>
  </Group>
  ...
</Group>
...
</Testcases>

The following data is extracted:

  • testcase results are read from the attributes of the <ExecutionSummary> element

    • Tests as TestCase\Result.Total

    • Succeeded as TestCase\Result.Passed

    • Failed as TestCase\Result.Failed

    • Errors as TestCase\Result.Error

    • Inconclusive as TestCase\Result.Inconclusive

  • from all <Testcase.. > elements MQC calculates the sums of the respective Status attribute values as

    • Testcase\Work Status.Stable

    • Testcase\Work Status.Reviewed

    • Testcase\Work Status.To Be Reviewed

    • Testcase\Work Status.In Progress

    • Testcase\Work Status.Modified

    • Testcase\Work Status.New

    • Testcase\Work Status.Total

  • additionally, MQC counts all <Testcase.. > elements where the Requirements attribute has values and stores it as

    • Testcase\Requirement Coverage.Covered (number of testcases with requirements)

    • Testcase\Requirement Coverage.Uncovered (number of testcases without requirements)

    • Testcase\Requirement Coverage.Total (number of all testcases)

For each <Testcase.. > element, MQC reads the following data details.

  • Testcase Result details:

    • Testcase as Subject

    • TPT/{MeasurementName}/Testcase/{TestcaseGroupName}/{TestcaseName} as Subject Path

      (where {TestcaseGroupName} is a concatenated string of all parent Group elements a testcase is assigned to, separated by /)

    • ArtifactPath as Artifact Inner Path

    • Result as Measure

    • from the TestcaseInformation element with the attribute Testcase="{id}" extract the value of the Result attribute as State

  • Testcase Requirement Coverage details:

    • Testcase as Subject

    • TPT/{MeasurementName}/Testcase/{TestcaseGroupName}/{TestcaseName} as Subject Path

      (where {TestcaseGroupName} is a concatenated string of all parent Group elements a testcase is assigned to, separated by /)

    • ArtifactPath as Artifact Inner Path

    • Requirement Coverage as Measure

    • Covered as State (if any requirement is listed in the Requirements attribute, otherwise Uncovered)

  • Testcase Work Status details (if there is a Status attribute):

    • Testcase as Subject

    • TPT/{MeasurementName}/Testcase/{TestcaseGroupName}/{TestcaseName} as Subject Path

      (where {TestcaseGroupName} is a concatenated string of all parent Group elements a testcase is assigned to, separated by /)

    • ArtifactPath as Artifact Inner Path

    • Work Status as Measure

    • New | In Progress | Modified | To Be Reviewed | Reviewed | Stable as State

      (based on the value of the Status attribute)

Structural Coverages

Structural coverage information is read from the <Header><Platform><CoverageData> elements.

<Header>
  <Property Name="System under Test" Value=""/>
  <Property Name="Revision" Value=""/>
  <Platform History="100" Name="C Platform" Stepsize="10000" Timeout="60000000">
    <Property Name="Platform Mapping" Value="&lt;none&gt;"/>
    <CoverageData Coverage="0.29" CoverageType="Decision" ToolName="CTC++" ToolVersion=""/>
    <CoverageData Coverage="0.16" CoverageType="Condition" ToolName="CTC++" ToolVersion=""/>
    <CoverageData Coverage="0.21" CoverageType="MC/DC" ToolName="CTC++" ToolVersion=""/>
  </Platform>
</Header>

  • depending on the measurement, the coverages provided by the report are read as:

    • Model Coverage in case of MiL, e.g. Model Coverage\{CoverageType}

    • Code Coverage in case of SiL, e.g. Code Coverage\{CoverageType}

  • the following coverage types are read by MQC:

    • Model Coverage\Condition

    • Model Coverage\Decision

    • Model Coverage\MC/DC

    • Code Coverage\Condition

    • Code Coverage\Decision

    • Code Coverage\MC/DC

  • the following measures are collected per coverage type:

    • Total (always assumed to be 100)

    • Covered (value of the Coverage attribute multiplied by 100)

    • Uncovered (Total - Covered)
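
For example, the element <CoverageData Coverage="0.29" CoverageType="Decision" .../> from the excerpt above, read for a SiL measurement, yields Code Coverage\Decision.Covered = 0.29 × 100 = 29, Code Coverage\Decision.Uncovered = 100 - 29 = 71 and Code Coverage\Decision.Total = 100.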

Requirements

The file reader prioritizes the HTML report as the source for requirement data, because it provides the most complete information available for requirements.

It first attempts to locate the overview.html file and extracts the requirement data from there (described in Requirements). If the overview.html file is not found, the file reader retrieves the data from the XML report instead.

<Requirements>
  <Requirement Id="SPEC-12" Text="Functional requirements"/>
  <Requirement Id="SPEC-13" State="SUCCESS" Text="If light_switch is ON, then headlight shall immediately be ON."/>
</Requirements>
<Testcases Number="11">
  <Testcase Description="..." Requirements="SPEC-6 SPEC-13"/>
</Testcases>
<Assesslets Number="22">
  <Group Name="Assesslets">
    <Assesslet Id="1" Name="..." Requirements="SPEC-13"/>
  </Group>
</Assesslets>
...
<Summary AssessmentDuration="454.739" ExecutionDuration="3667.713">
  <ExecutionSummary Errors="0" Failed="0" Inconclusive="0" Succeeded="209" Tests="209"/>
  <RequirementsSummary Errors="0" Failed="0" Inconclusive="0" Not-Covered="12" Passed="291" Requirements="303"/>
</Summary>

If the xml report contains a <RequirementsSummary> element (since TPT version 18), the requirement information is read from there. Otherwise, MQC iterates over all <Requirement> elements and counts the corresponding State attribute values (a minimal sketch follows the list below).

The following data is extracted:

  • Requirement\Result.Passed

  • Requirement\Result.Failed

  • Requirement\Result.Error

  • Requirement\Result.Inconclusive (including number of requirements with State == 'Not-Covered')

  • Requirement\Result.Total
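
A minimal sketch of this summary/fallback logic, using Python's standard xml.etree on the element and attribute names shown in the excerpt above (the default state for requirements without a State attribute and the mapping of raw state values to MQC variable names are assumptions):

import xml.etree.ElementTree as ET
from collections import Counter

def requirement_results(root):
    summary = root.find(".//RequirementsSummary")
    if summary is not None:  # available since TPT version 18
        return {
            "Passed": int(summary.get("Passed", 0)),
            "Failed": int(summary.get("Failed", 0)),
            "Error": int(summary.get("Errors", 0)),
            # Not-Covered requirements are counted as Inconclusive (see above)
            "Inconclusive": int(summary.get("Inconclusive", 0))
                            + int(summary.get("Not-Covered", 0)),
            "Total": int(summary.get("Requirements", 0)),
        }
    # Fallback: count the State attribute of each <Requirement> element;
    # mapping raw values (e.g. "SUCCESS") to the variable names above is omitted here.
    requirements = list(root.iter("Requirement"))
    states = Counter(req.get("State", "Not-Covered") for req in requirements)
    states["Total"] = len(requirements)
    return dict(states)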

To extract the number of requirements linked to testcases, MQC counts all requirements listed as value in the Requirements attribute in any <Testcase.. > element:

  • Requirement\Testcase Coverage.Covered (number of requirements linked to any testcase)

  • Requirement\Testcase Coverage.Uncovered (Total - Covered)

  • Requirement\Testcase Coverage.Total

To extract the number of requirements linked to assesslets, MQC counts all requirements listed as value in the Requirements attribute in any <Assesslet.. > element:

  • Requirement\Assesslet Coverage.Covered (number of requirements linked to any assesslet)

  • Requirement\Assesslet Coverage.Uncovered (Total - Covered)

  • Requirement\Assesslet Coverage.Total

For each <Requirement.. > element, MQC reads the following data details.

  • Requirement Result details:

    • Requirement as Subject

    • TPT/{MeasurementName}/Requirement/{RequirementId} as Subject Path

    • ArtifactPath as Artifact Inner Path

    • Result as Measure

    • the value of the State attribute as State, or Uncovered if the State attribute does not exist

  • Requirement Testcase Coverage details:

    • Requirement as Subject

    • TPT/{MeasurementName}/Requirement/{RequirementId} as Subject Path

    • ArtifactPath as Artifact Inner Path

    • Testcase Coverage as Measure

    • Covered as State (if the value of the Id attribute is listed in the Requirements attribute of any Testcase elements, otherwise Uncovered)

  • Requirement Assesslet Coverage details:

    • Requirement as Subject

    • TPT/{MeasurementName}/Requirement/{RequirementId} as Subject Path

    • ArtifactPath as Artifact Inner Path

    • Assesslet Coverage as Measure

    • Covered as State (if the value of the Id attribute is listed in the Requirements attribute of any Assesslet elements, otherwise Uncovered)

Assesslets

As for requirements, the file reader prioritizes obtaining comprehensive data for assesslets from the HTML report. It first attempts to locate the overview.html file and extracts the assesslet data from there (described in Assesslets). If the overview.html file is not found, the file reader retrieves the data from the XML report instead.

<Assesslets Number="22">
  <Group Name="...">
    <Assesslet Id="1" Name="..." Requirements="SPEC-13" Status="Stable" Version="2"/>
    <Assesslet Id="2" Name="..." Requirements="SPEC-6" Status="Stable" Version="2"/>
    ...
  </Group>
  <Group Name="...">
    <Group Name="...">
      <Assesslet Id="14" Name="..." Requirements="SPEC-16 SPEC-4 SPEC-9" Status="Stable" Version="2"/>
      ...
    </Group>
    <Group Name="...">
      <Assesslet Id="19" Name="..." Requirements="SPEC-17 SPEC-18 SPEC-9" Status="Stable" Version="1"/>
      <Assesslet Id="20" Name="..." Requirements="SPEC-17 SPEC-18 SPEC-8" Status="Stable" Version="2"/>
      ...
    </Group>
    ...
  </Group>
</Assesslets>
<Testresult>
  <TestcaseInformation AssessmentDuration="1.371" ExecutionDate="12:52:49 12.11.2020" ExecutionDuration="0.1" ReportFile="...\report.html" Result="PASSED" SimulationDuration="6.0" Testcase="184" TimeoutOccurred="false">
    <AssessletInformation AssessResult="PASSED" Assesslet="1"/>
    <AssessletInformation AssessResult="PASSED" Assesslet="2"/>
    <AssessletInformation AssessResult="INCONCLUSIVE" Assesslet="3"/>
    ...
  </TestcaseInformation>
  <TestcaseInformation AssessmentDuration="0.903" ExecutionDate="12:52:50 12.11.2020" ExecutionDuration="0.099" ReportFile="...\report.html" Result="PASSED" SimulationDuration="16.0" Testcase="77" TimeoutOccurred="false">
    <AssessletInformation AssessResult="PASSED" Assesslet="1"/>
    ...
  </TestcaseInformation>
  ...
</Testresult>

The following data is extracted:

  • from all <Assesslet..> elements MQC calculates the sums of the respective Status attribute values as

    • Assesslet\Work Status.Stable

    • Assesslet\Work Status.Reviewed

    • Assesslet\Work Status.To Be Reviewed

    • Assesslet\Work Status.In Progress

    • Assesslet\Work Status.Modified

    • Assesslet\Work Status.New

    • Assesslet\Work Status.Total

  • assesslet results are read from the AssessResult attributes of the <Testresult><TestcaseInformation><AssessletInformation> elements

    • Assesslet\Result.Error

    • Assesslet\Result.Failed

    • Assesslet\Result.Inconclusive

    • Assesslet\Result.Passed

    • Assesslet\Result.Total

  • additionally, MQC counts all <Assesslet.. > elements where the Requirements attribute has values and stores it as

    • Assesslet\Requirement Coverage.Covered (number of assesslets with requirements)

    • Assesslet\Requirement Coverage.Uncovered (number of assesslets without requirements)

    • Assesslet\Requirement Coverage.Total (number of all assesslets)

For each <Assesslet.. > element, MQC reads the following data details.

  • Assesslet Result details:

    • Assesslet as Subject

    • TPT/{MeasurementName}/Assesslet/{AssessletGroupName}/{AssessletName}[ID="{Id}"] as Subject Path

      (where {AssessletGroupName} is a concatenated string of all parent Group elements an assesslet is assigned to, separated by /)

    • ArtifactPath as Artifact Inner Path

    • Result as Measure

    • from the AssessletInformation element with the attribute Assesslet="{id}" extract the value of the AssessResult attribute as State

  • Assesslet Requirement Coverage details:

    • Assesslet as Subject

    • TPT/{MeasurementName}/Assesslet/{AssessletGroupName}/{AssessletName}[ID="{Id}"] as Subject Path

      (where {AssessletGroupName} is a concatenated string of all parent Group elements an assesslet is assigned to, separated by /)

    • ArtifactPath as Artifact Inner Path

    • Requirement Coverage as Measure

    • Covered as State (if any requirement is listed in the Requirements attribute, otherwise Uncovered)

  • Assesslet Work Status details (if there is a Status attribute):

    • Assesslet as Subject

    • TPT/{MeasurementName}/Assesslet/{AssessletGroupName}/{AssessletName}[ID="{Id}"] as Subject Path

      (where {AssessletGroupName} is a concatenated string of all parent Group elements an assesslet is assigned to, separated by /)

    • ArtifactPath as Artifact Inner Path

    • Work Status as Measure

    • New | In Progress | Modified | To Be Reviewed | Reviewed | Stable as State

      (based on the value of the Status attribute)

4.6.1.4.2. HTML

The TPT HTML file reader extracts the same measures as the TPT XML file reader. Data is mainly read from file overview.html.

<body>
  <div class="header">TPT Report: Overview</div>
  <div class="table-caption">Test Summary</div>
  <table>
    <tr><td><span>TPT File</span></td><td><span>Model01.tptz</span></td></tr>
    <tr><td><span>TPT Version</span></td><td><span>15u3</span></td></tr>
    <tr><td><span>Date</span></td><td><span>07-Dec-2020</span></td></tr>
    <tr><td><span>System under Test</span></td><td><span>Model01</span></td></tr>
    ...
  </table>
  ...
  <p class="timestamp">Creation Time: 16:53:28 23.09.2021, TPT Version: 16u2</p>
</body>

The ReportDateTime is fetched from the <p class="timestamp"> element at the end of the html body.

The System under Test from the first table of the report is read as the artifact name. System under Test may also be shortened to SUT. If neither can be found, the name of the artifact is taken from TPT File (file name without extension).

<table>
  <caption>Platform Information</caption>
  <tr><td><span>Platform Mapping</span></td><td><span>FromInterfaceImport</span></td></tr>
  <tr><td><span>MATLAB Version</span></td><td><span>MATLAB 9.3</span></td></tr>
  <tr><td><span>Variable 'TestRun'</span></td><td><span>SIL</span></td></tr>
  <tr><td><span>Variable 'iS_MIL'</span></td><td><span>0</span></td></tr>
</table>

The measurement name is expected as part of the file name. If it can’t be extracted from there, MQC checks if the report contains a table with the caption Platform Information and reads the platform name from the row with the inner text Variable 'TestRun'.

Additionally, it is possible to extract ArtifactPath, MeasurementName and/or ReportDateTime from the filepath of a report by defining a File Reader Option, see File Reader Options.

Testcases

<table class="table-requirement-overview-new">
  <caption>Test Case Summary</caption>
  <tr>
    <td><span>Test Case Name</span></td>
    <td><span>Result</span></td>
    <td><span>ID</span></td>
    <td><span>Execution</span></td>
    <td><span>Assessment</span></td>
    <td><span>Duration</span></td>
    <td><span>Revision Status</span></td>
    <td><span>Directory</span></td>
    <td><span>Comment</span></td>
  </tr>
  <tr>
    <td class="generated_120" colspan="8">
      <div class="icon16x16 icon-group"></div>
      <span class="scenariogroup-new">change switch</span>
    </td>
  </tr>
  <tr>
    <td>
      <div class="icon16x16 icon-testcase"></div>
      <span class="passed">OFF ON</span>
    </td>
    <td>
      <div class="icon16x16 icon-success"></div>
    </td>
    <td>
      <span>184</span>
    </td>
    <td>
      <span>12:52:49 12.11.2020</span>
    </td>
    <td>
      <span>12:52:49 12.11.2020</span>
    </td>
    <td>
      <span>1.471s</span>
    </td>
    <td>
      <span>Stable</span>
    </td>
    <td>
      <a href="FUSION_Platform\000_change_switch\000_OFF_ON" target="_blank">
        <span>FUSION_Platform\000_change_switch\000_OFF_ON</span>
      </a>
    </td>
    <td/>
  </tr>
  <tr>
    ...
</table>

Testcase data is mainly fetched from the table with the caption Test Case Summary contained in the file overview.html. The file reader iterates over all rows of the table and sums up the testcase results by type, as read from the Result column:

  • Testcase\Result.Error

  • Testcase\Result.Failed

  • Testcase\Result.Inconclusive

  • Testcase\Result.Passed

  • Testcase\Result.Total

Accordingly, the testcase work status is fetched from the Revision Status column:

  • Testcase\Work Status.Stable

  • Testcase\Work Status.Reviewed

  • Testcase\Work Status.To Be Reviewed

  • Testcase\Work Status.In Progress

  • Testcase\Work Status.Modified

  • Testcase\Work Status.New

  • Testcase\Work Status.Total

The requirement coverage, i.e. the number of testcases linked to requirements, is calculated by counting all testcases that appear in the test case result list of at least one requirement (see Requirements):

  • Testcase\Requirement Coverage.Covered

  • Testcase\Requirement Coverage.Uncovered

  • Testcase\Requirement Coverage.Total

For each row of this table with all column values, MQC reads the following data details.

  • Testcase Result details:

    • Testcase as Subject

    • TPT/{MeasurementName}/Testcase/{TestcaseGroupName}/{TestCaseName} as Subject Path

      (where {TestcaseGroupName} is a concatenated string of the contents of the Test Case Name column of all preceding rows with a folder icon, separated by /)

    • ArtifactPath as Artifact Inner Path

    • Result as Measure

    • value of Result column as State

  • Testcase Requirement Coverage details:

    • Testcase as Subject

    • TPT/{MeasurementName}/Testcase/{TestcaseGroupName}/{TestCaseName} as Subject Path

      (where {TestcaseGroupName} is a concatenated string of the contents of the Test Case Name column of all preceding rows with a folder icon, separated by /)

    • ArtifactPath as Artifact Inner Path

    • Requirement Coverage as Measure

    • Covered as State (if the testcase is linked to any requirement, see Requirements, otherwise Uncovered)

  • Testcase Work Status details (if there is a Revision Status column):

    • Testcase as Subject

    • TPT/{MeasurementName}/Testcase/{TestcaseGroupName}/{TestCaseName} as Subject Path

      (where {TestcaseGroupName} is a concatenated string of the contents of the Test Case Name column of all preceding rows with a folder icon, separated by /)

    • ArtifactPath as Artifact Inner Path

    • Work Status as Measure

    • New | In Progress | Modified | To Be Reviewed | Reviewed | Stable as State

      (based on the value of the Revision Status column)

Structural Coverages

<table class="coverage-new">
  <caption>Coverage Information</caption>
  <tr><td><span>Type</span></td><td><span>Coverage [TER%]</span></td><td><span>Report</span></td></tr>
  <tr><td><span>Condition</span></td><td><span>16.0%</span></td><td/></tr>
  <tr><td><span>Decision</span></td><td><span>29.0%</span></td><td/></tr>
  <tr><td><span>Function</span></td><td><span>100.0%</span></td><td/></tr>
  <tr><td><span>MC/DC</span></td><td><span>21.0%</span></td><td/></tr>
  <tr><td><span>Multicondition</span></td><td><span>18.0%</span></td><td/></tr>
  <tr><td><span>Statement</span></td><td><span>29.0%</span></td><td/></tr>
</table>

Coverage data is fetched from the table with the caption Coverage Information. MQC reads the first column as measure name and the second column as measure value:

  • depending on the measurement, the coverages provided by the report are read as:

    • Model Coverage in case of MiL, e.g. Model Coverage\{Type}

    • Code Coverage in case of SiL, e.g. Code Coverage\{Type}

  • the following coverage types are read by MQC:

    • Model Coverage\Condition

    • Model Coverage\Decision

    • Model Coverage\MC/DC

    • Code Coverage\Condition

    • Code Coverage\Decision

    • Code Coverage\MC/DC

  • the following measures are collected per coverage type:

    • Total (always assumed to be 100)

    • Covered (value of the Coverage column)

    • Uncovered (Total - Covered)

Requirements

Requirement data is mainly fetched from the table where the caption contains Requirement Coverage Summary.

<table>
  <caption>Requirement Coverage Summary (only linked requirements)</caption>
  <tr>
    <td><span>Requirement Coverage</span></td>
    <td><span>Passed</span></td>
    <td><span>Failed</span></td>
    <td><span>Execution Error</span></td>
    <td><span>Inconclusive</span></td>
    <td><span>Not Covered</span></td></tr>
  <tr>
    <td>
     <span>10 out of 13 requirements are linked to test cases</span><br/>
     <span>6 out of 13 requirements are covered</span>
    </td>
    <td><span>6</span></td>
    <td><span>0</span></td>
    <td><span>0</span></td>
    <td><span>0</span></td>
    <td><span>7</span></td>
  </tr>
</table>

The following data is extracted:

  • Requirement\Result.Passed

  • Requirement\Result.Failed

  • Requirement\Result.Error

  • Requirement\Result.Inconclusive (including number of requirements with state Not Covered)

  • Requirement\Result.Total (sum of all result values)

To distinguish between requirements linked to test cases and those linked to assesslets, MQC additionally reads requirement.html. Here, MQC fetches the data from the two tables

  • Requirements Results

    This table contains all requirements linked to test cases. Additionally it contains requirements not linked at all, which are assumed to be not testable.

  • Requirements Assesslet Results

    This table contains all requirements linked to assesslets.

Reading both tables, MQC creates a list of requirements, each containing a list of linked testcases, assesslets, or both (a minimal sketch of this merge follows the measure list below).

This list of requirements is used to extract the following measures:

  • Requirement\Testcase Coverage.Covered (number of requirements with at least one linked testcase)

  • Requirement\Testcase Coverage.Uncovered (total - covered)

  • Requirement\Testcase Coverage.Total (number of all testable requirements)

  • Requirement\Assesslet Coverage.Covered (number of requirements with at least one linked assesslet)

  • Requirement\Assesslet Coverage.Uncovered (total - covered)

  • Requirement\Assesslet Coverage.Total (number of all testable requirements)
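
A minimal sketch of this merge, assuming the two tables have already been parsed into dictionaries mapping each requirement ID to its linked items (all names are hypothetical, and the filtering of non-testable requirements is omitted):

from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    testcases: list = field(default_factory=list)   # from the "Requirements Results" table
    assesslets: list = field(default_factory=list)  # from the "Requirements Assesslet Results" table

def merge(testcase_links, assesslet_links):
    requirements = {}
    for req_id, testcases in testcase_links.items():
        requirements.setdefault(req_id, Requirement(req_id)).testcases.extend(testcases)
    for req_id, assesslets in assesslet_links.items():
        requirements.setdefault(req_id, Requirement(req_id)).assesslets.extend(assesslets)
    return list(requirements.values())

def coverage(requirements, kind):
    # kind is "testcases" or "assesslets"
    covered = sum(1 for r in requirements if getattr(r, kind))
    return {"Covered": covered,
            "Uncovered": len(requirements) - covered,
            "Total": len(requirements)}

coverage(reqs, "testcases") then corresponds to the Requirement\Testcase Coverage measures and coverage(reqs, "assesslets") to the Requirement\Assesslet Coverage measures.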

For each requirement from these two tables, MQC reads the following data details.

  • Requirement Result details (if the State column is Passed, Failed, Error or Inconclusive):

    • Requirement as Subject

    • TPT/{MeasurementName}/Requirement/{RequirementID} as Subject Path

    • ArtifactPath as Artifact Inner Path

    • Result as Measure

    • the value of the State column as State

  • Requirement Testcase Coverage details:

    • Requirement as Subject

    • TPT/{MeasurementName}/Requirement/{RequirementID} as Subject Path

    • ArtifactPath as Artifact Inner Path

    • Testcase Coverage as Measure

    • Covered as State (if any test case is listed in the Test Case Results column of the Requirements Results table, otherwise Uncovered)

  • Requirement Assesslet Coverage details:

    • Requirement as Subject

    • TPT/{MeasurementName}/Requirement/{RequirementID} as Subject Path

    • ArtifactPath as Artifact Inner Path

    • Assesslet Coverage as Measure

    • Covered as State (if any assesslet is listed in the Assesslet Results column of the Requirements Assesslet Results table, otherwise Uncovered)

Assesslets

<table class="table-requirement-overview-new">
  <caption>Assesslet Summary</caption>
  <tr>
    <td/>
    <td><span>Assesslet Name</span></td>
    <td><span>Revision Status</span></td>
    <td><span>Passed</span></td>
    <td><span>Failed</span></td>
    <td><span>Execution Error</span></td>
    <td><span>Inconclusive</span></td>
    <td><span>Total</span></td>
  </tr>
  <tr>
    <td class="image-cell">
      <div class="icon16x16 icon-success"></div>
    </td>
    <td class="passed">
      <span>headlight hold if switch AUTO hysterese bright [ID=7]</span>
    </td>
    <td>
      <span>Stable</span>
    </td>
    <td class="number-cell">
      <span>4</span>
    </td>
    <td class="number-cell"/>
    <td class="number-cell"/>
    <td class="number-cell">
      <span>7</span>
    </td>
    <td class="number-cell">
      <span>11</span>
    </td>
  </tr>
  ...
</table>

Assesslet data is mainly fetched from the table with the caption Assesslet Summary contained in the file overview.html. The file reader iterates over all rows of the table and calculates the sum of the respective assesslet results by taking the worst result per row (if the assesslet has been tested for multiple testcases):

  • Assesslet\Result.Error

  • Assesslet\Result.Failed

  • Assesslet\Result.Inconclusive

  • Assesslet\Result.Passed

  • Assesslet\Result.Total

Accordingly, the assesslet work status is fetched from the Revision Status column:

  • Assesslet\Work Status.Stable

  • Assesslet\Work Status.Reviewed

  • Assesslet\Work Status.To Be Reviewed

  • Assesslet\Work Status.In Progress

  • Assesslet\Work Status.Modified

  • Assesslet\Work Status.New

  • Assesslet\Work Status.Total

The requirement coverage, i.e. the number of assesslets linked to requirements, is calculated by counting all assesslets that appear in the assesslet result list of at least one requirement (see Requirements):

  • Assesslet\Requirement Coverage.Covered

  • Assesslet\Requirement Coverage.Uncovered

  • Assesslet\Requirement Coverage.Total

For each row of this table with all column values, MQC reads the following data details.

  • Assesslet Result details:

    • Assesslet as Subject

    • TPT/{MeasurementName}/Assesslet/{AssessletName} as Subject Path

    • ArtifactPath as Artifact Inner Path

    • Result as Measure

    • value of first column as State

  • Assesslet Requirement Coverage details:

    • Assesslet as Subject

    • TPT/{MeasurementName}/Assesslet/{AssessletName} as Subject Path

    • ArtifactPath as Artifact Inner Path

    • Requirement Coverage as Measure

    • Covered as State (if the assesslet is linked to any requirement, see Requirements, otherwise Uncovered)

  • Assesslet Work Status details (if there is a Revision Status column):

    • Assesslet as Subject

    • TPT/{MeasurementName}/Assesslet/{AssessletName} as Subject Path

    • ArtifactPath as Artifact Inner Path

    • Work Status as Measure

    • New | In Progress | Modified | To Be Reviewed | Reviewed | Stable as State

      (based on the value of the Revision Status column)

4.6.1.5. Razorcat Tessy

From the following extract of a Tessy example XML report,

<report success="notok" tessy_version="4.0.15" xml_version="3">
    <info date="2018-08-23" time="16:20:30+0200"/><tessyobject id="1024" level="0" name="Testsuite" success="notok" type="project">
        <testcase_statistics notexecuted="0" notok="11" ok="54" success="notok" total="65"/>
        <coverage>
           <c0 notreached="0" percentage="100" reached="142" success="ok" total="142"/>
           <c1 notreached="0" percentage="100" reached="113" success="ok" total="113"/>
           <mcdc notreached="0" percentage="100" reached="117" success="ok" total="117"/>
        </coverage>
        ...
     </tessyobject>
</report>

MQC reads

  • from the main (report) header:

    • tessy_version (stored in MQC as TessyReportVersion)

  • from the <info..> header:

    • date and time (stored in MQC as ReportDateTime)

  • from the <tessyobject..> header:

    • name (stored in MQC as ArtifactName)

    • type (stored in MQC as TessyObjectType)

  • from the <testcase_statistics..> header:

    • ok as Testcase\Result.Passed

    • notok as Testcase\Result.Failed

    • notexecuted as Testcase\Result.Unexecuted

    • sum of these three values as Testcase\Result.Total

  • from the <coverage> <c0..> header:

    • reached as Code Coverage\Statement.Covered

    • notreached as Code Coverage\Statement.Uncovered

    • total as Code Coverage\Statement.Total

  • from the <coverage> <c1..> header:

    • reached as Code Coverage\Branch.Covered

    • notreached as Code Coverage\Branch.Uncovered

    • total as Code Coverage\Branch.Total

  • from the <coverage> <mcdc..> header:

    • reached as Code Coverage\MC/DC.Covered

    • notreached as Code Coverage\MC/DC.Uncovered

    • total as Code Coverage\MC/DC.Total

4.6.1.6. MathWorks Polyspace

MQC supports two types of Polyspace report formats:

4.6.1.6.1. XML

If a Polyspace XML report is created, MQC reads from the xml-file:

  • ReportDateTime: from element PubDateTime

  • ArtifactPath: from element Subtitle

  • ArtifactName: same as ArtifactPath

The xml-file refers to several xfrag-files in the Polyspace-doc directory, which is located on the same level as the xml-file.

From the image-000-chapter.xfrag-file, MQC extracts

  • BaseMeasureName: from the title elements of the tables

  • VariableName: first entry element of each table body row

  • MeasureValue: second entry element of each table body row

<table>
  <title>Coding Rules Summary - MISRA-C Checker</title>
  <tgroup>
    <tbody>
      <row><entry>Violations</entry><entry>52</entry></row>
      <row><entry>Pass/Fail</entry><entry>-</entry></row>
    </tbody>
  </tgroup>
</table>
<table>
  <title>Run-Time Checks Summary</title>
  <tgroup>
    <tbody>
      <row><entry>Number of Red Checks</entry><entry><emphasis role="red">2</emphasis></entry></row>
      <row><entry>Number of Gray Checks</entry><entry><emphasis role="gray">10</emphasis></entry></row>
      <row><entry>Number of Orange Checks</entry><entry><emphasis role="orange">13</emphasis></entry></row>
      <row><entry>Number of Green Checks</entry><entry><emphasis role="green">205</emphasis></entry></row>
      <row><entry>Proven</entry><entry>100.0%</entry></row>
      <row><entry>Pass/Fail</entry><entry>-</entry></row>
    </tbody>
  </tgroup>
</table>
<table>
  <title>Global Variable Summary</title>
  <tgroup>
    <tbody>
      <row><entry>Used non-shared variable</entry><entry>51</entry></row>
    </tbody>
  </tgroup>
</table>

The extracted data is then modified and transformed as follows:

  • If title = Run-Time Checks Summary

    • Number of Red Checks stored as Run-Time Checks.Major

    • Number of Gray Checks stored as Run-Time Checks.Minor

    • Number of Orange Checks stored as Run-Time Checks.Moderate

    • Number of Green Checks stored as Run-Time Checks.Good

    • Percentage of Proven stored as Run-Time Checks.Proven

    • Pass/Fail stored as Run-Time Checks.Pass_Fail

  • If title = Coding Rules Summary - MISRA-C Checker

    • Violations stored as MISRA-C Checker.Violations

  • If title = Global Variable Summary

    • Used non-shared variable stored as Global Variable.Used non-shared variable

    • Unused variable stored as Global Variable.Unused variable
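
The title-to-measure mapping above can be sketched as follows. This is only an illustrative sketch, not MQC's implementation; it assumes the xfrag fragment looks like the excerpt above and contains no XML declaration, and the file name is hypothetical.

import xml.etree.ElementTree as ET

# Mapping of table titles and row names to the measure names listed above.
RENAME = {
    "Run-Time Checks Summary": {
        "Number of Red Checks": "Run-Time Checks.Major",
        "Number of Gray Checks": "Run-Time Checks.Minor",
        "Number of Orange Checks": "Run-Time Checks.Moderate",
        "Number of Green Checks": "Run-Time Checks.Good",
        "Proven": "Run-Time Checks.Proven",
        "Pass/Fail": "Run-Time Checks.Pass_Fail",
    },
    "Coding Rules Summary - MISRA-C Checker": {
        "Violations": "MISRA-C Checker.Violations",
    },
    "Global Variable Summary": {
        "Used non-shared variable": "Global Variable.Used non-shared variable",
        "Unused variable": "Global Variable.Unused variable",
    },
}

with open("image-000-chapter.xfrag", encoding="utf-8") as fh:
    # Wrap the fragment in a dummy root so the multiple <table> elements parse as one document.
    root = ET.fromstring("<root>" + fh.read() + "</root>")

values = {}
for table in root.iter("table"):
    title = table.findtext("title")
    for row in table.iter("row"):
        entries = row.findall("entry")
        name = "".join(entries[0].itertext()).strip()   # VariableName
        value = "".join(entries[1].itertext()).strip()  # MeasureValue (may contain <emphasis>)
        if title in RENAME and name in RENAME[title]:
            values[RENAME[title][name]] = value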

4.6.1.6.2. Text

If a Polyspace tab-separated text report is created, MQC reads the following from this text file:

  • ReportDateTime: time stamp of the last modification of the text file

  • ArtifactPath: extracted from File column (see below)

  • ArtifactName: same as ArtifactPath

  • BaseMeasureName: extracted from Family column

  • VariableName: extracted from the Color, Information, Check and Comment columns, based on the BaseMeasureName

  • MeasureValue: aggregated count per Color

To get the ArtifactPath, MQC extracts the file paths of all files used to create the report from the File column. MQC then obtains the common prefix from these file paths and takes the last directory from that prefix. This is stored as ArtifactPath.

For example, if the paths read from the File column are as follows:

C:\dev\Models\GlobalPosition\TLProj\TL_GlobalPosition\GlobalPosition.h
C:\dev\Models\GlobalPosition\TLSim\Rte_GlobalPosition.h
C:\dev\Models\GlobalPosition\TLSim\TL_GlobalPosition_fri.h

In this example the common prefix is C:\dev\Models\GlobalPosition and MQC extracts the last common directory GlobalPosition as the artifact path.
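
A minimal sketch of this derivation (not MQC's implementation) for the example paths:

import ntpath  # Windows-style path handling, independent of the platform the sketch runs on

files = [
    r"C:\dev\Models\GlobalPosition\TLProj\TL_GlobalPosition\GlobalPosition.h",
    r"C:\dev\Models\GlobalPosition\TLSim\Rte_GlobalPosition.h",
    r"C:\dev\Models\GlobalPosition\TLSim\TL_GlobalPosition_fri.h",
]

common_prefix = ntpath.commonpath(files)         # C:\dev\Models\GlobalPosition
artifact_path = ntpath.basename(common_prefix)   # GlobalPosition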

After reading, the extracted data is then modified and transformed as follows:

  • If Family equals Run-time Check:

    • Color = Red is stored as Run-Time Checks.Major

    • Color = Red and Comment is not empty is stored as Run-Time Checks.Major with Comments

    • Color = Gray is stored as Run-Time Checks.Minor

    • Color = Gray and Comment is not empty is stored as Run-Time Checks.Minor with Comments

    • Color = Orange is stored as Run-Time Checks.Moderate

    • Color = Orange and Comment is not empty is stored as Run-Time Checks.Moderate with Comments

    • Color = Green is stored as Run-Time Checks.Good

  • If Family equals Global Variable:

    • Check = Unused variable is stored as Global Variable.Unused variable

    • Check = Used non-shared variable is stored as Global Variable.Used non-shared variable

  • If Family starts with MISRA C:

    • Information = Category:Mandatory is stored as MISRA-C Checker.Mandatory

    • Information = Category:Mandatory and Comment is not empty is stored as MISRA-C Checker.Mandatory with Comments

    • Information = Category:Required is stored as MISRA-C Checker.Required

    • Information = Category:Required and Comment is not empty is stored as MISRA-C Checker.Required with Comments

    • Information = Category:Advisory is stored as MISRA-C Checker.Advisory

    • Information = Category:Advisory and Comment is not empty is stored as MISRA-C Checker.Advisory with Comments

    • Sum of Category:Mandatory, Category:Required and Category:Advisory is stored as MISRA-C Checker.Violations

All base measures with the suffix with Comments are a subset of the corresponding root measure. For example, Run-Time Checks.Major is the count of all run-time checks which are red, with or without comments. Thus, Run-Time Checks.Major with Comments is always less than or equal to Run-Time Checks.Major.

4.6.1.7. BTC EmbeddedTester

The BTC EmbeddedTester adapter reads data from the HTML report files of this dynamic test tool. The adapter supports reports from version 3.x onwards as well as reports from version 2.11. Newer tool versions consolidate all Test Execution and Coverage data into a single HTML report, whereas version 2.11 generates separate HTML files for each measurement, as well as separate reports for Test Execution and Coverage. MQC treats the Test Execution Report or Back-to-Back Test Report file as the main report and retrieves the coverage data from a separate file, which should be located in the same directory.

MQC reads out:

<h4>Report Base</h4>
<table class="table dualTable" id="report_base_table">
   <tr>
       <th>Property</th>
       <th>Value</th>
   </tr>
   <tr>
       <td>Scope</td>
       <td>EV3Control_main</td>
   </tr>
   <tr>
       <td>Path</td>
       <td>EV3Control_demo_ec/EV3Control</td>
   </tr>
   ...
</table>
  • In version 3.x and newer, from the Report Base table:

    • The value of Scope property stored in MQC as ArtifactName

    • The value of Path property stored in MQC as ArtifactPath

<h2 id="reportMetaInformation" class="chapter has-toggle">Meta Information</h2>
<h4>Report Base</h4>
<div class="data_table_container">
    <table class="data_table">
        <tr><td>Scopes: ArtifactPath, ..., ... </td></tr>
    </table>
</div>
  • In version 2.11, from the table under section 1. Meta Information => Report Base:

    • The first value of the comma-separated Scopes list, stored in MQC as ArtifactName

<h4>General</h4>
<table class="table dualTable">
  <tr>
      <th>Property</th>
      <th>Value</th>
  </tr>
  ...
  <tr>
      <td>Profile Creation Date</td>
      <td>Jan 30, 2024, 11:56:47 AM</td>
  </tr>
  ...
</table>
  • In version 3.x and newer, from the General table:

    • The value of Profile Creation Date stored in MQC as ReportDateTime

<h2 id="reportMetaInformation" class="chapter has-toggle">Meta Information</h2>
<h4>General Information</h4>
<div class="data_table_container">
    <table class="data_table">
        <tr><td class="colA">Profile Creation Date</td><td class="colB">Mar 18, 2024, 3:34:31 PM</td></tr>
    </table>
</div>
  • In version 2.11, from the table under section 1. Meta Information => General Information:

    • The value of Profile Creation Date stored in MQC as ReportDateTime

<h2 id="Summary" class="chapter has-toggle">Summary</h2>
<div class="data_table_container">
    <table class="data_table">
        <tr><td >Execution Mode</td><td >TL MIL</td></tr>
    </table>
</div>
  • As previously mentioned, in version 2.11, each file contains data for a single measurement. If the file is a Back-to-Back Test Report, the measurement is B2B. For other Test Execution Reports, the measurement is read from the table under the Summary section.

    • The value of Execution Mode is stored in MQC as MeasurementName

<h4>Test Execution Results Overview</h4>
<table class="table">
    <tr>
      <th></th>
      <th>TestCase</th>
      <th>Length</th>
      <th>Requirement (report base)</th>
      <th>Scope</th>
      <th>Folder</th>
      <th>Result (TL MIL)</th>
      <th>Result (SIL)</th>
    </tr>
    <tr>
      <td style="text-align:center"><span class="icon-sdc" title="No problems during simulation"></span></td>
      <td>TC-REQ_SC_1_2</td>
      <td>5</td>
      <td><a class="link"> REQ_SC_1_2</a></td>
      <td>SDOCAR</td>
      <td>Default Test Cases</td>
      <td><div class="ok-glyph ok-color">Passed</div></td>
      <td><div class="ok-glyph ok-color">Passed</div></td>
    </tr>
</table>
<h2>Test Execution Results Overview</h2>
<table class="data_table">
    <tr>
      <th></th>
      <th>TestCase</th>
      <th>Length</th>
      <th>Requirement (report base)</th>
      <th>Scope</th>
      <th>Folder</th>
      <th>Result</th>
    </tr>
    <tr>
      <td style="text-align:center"><span class="icon-sdc" title="No problems during simulation"></span></td>
      <td><a class="link">TC-REQ_SC_2_1</a></td>
      <td>11</td>
      <td><a class="link">REQ_SC_2_1</a></td>
      <td>SDOCAR</td>
      <td>Default Test Cases</td>
      <td><div class="result_container"><div class="PASSED" /><a href="#26_testcase">Passed</a></div></td>
    </tr>
</table>
  • In version 2.11, the test case information is found in the Test Execution Report file. Independently of the tool version, this information is located in the Test Execution Results Overview table. However, since version 3.x, distinct columns are allocated for the test case results of the various measurements.

    MQC generates these measures for each measurement based on the outcomes of the respective test cases:

    • Number of Passed test cases as Testcase\Result.Passed

    • Number of Failed test cases as Testcase\Result.Failed

    • Number of Error test cases as Testcase\Result.Error

    • Number of No Verdict test cases as Testcase\Result.No Verdict

    • Number of Outdated test cases as Testcase\Result.Outdated

    • Number of Missing Executions test cases as Testcase\Result.Missing Execution

    • Number of test cases as Testcase\Result.Total

    • The count of test cases containing item(s) in the Requirement column as Testcase\Requirement Coverage.Covered

    • The count of test cases with no item listed in the Requirement column as Testcase\Requirement Coverage.Uncovered

    • The count of test cases as Testcase\Requirement Coverage.Total

    For each row and measurement, MQC reads two data details.

    For the first Data Detail, MQC reads:

    • Testcase as Subject

    • EmbeddedTester/{MeasurementName}/Testcase/{Folder}/{TestCase} as Subject Path

    • ArtifactPath as Artifact Inner Path

    • Result as Measure

    • Passed | Failed | Error | No Verdict | Outdated | Missing Execution (based on the value in the result column) as State

    For the second Data Detail, MQC reads:

    • Testcase as Subject

    • EmbeddedTester/{MeasurementName}/Testcase/{Folder}/{TestCase} as Subject Path

    • ArtifactPath as Artifact Inner Path

    • Requirement Coverage as Measure

    • Covered (if any requirement is listed in the Requirement column), otherwise Uncovered as State

<h3>2. Requirements Traceability</h3>
<table class="table">
    <tr>
       <th>Requirement ID</th>
       <th>Description</th>
       <th>Test Cases</th>
       <th>Covered</th>
       <th>Passed (TL MIL)</th>
       <th>Passed (SIL)</th>
    </tr>
    <tr id="2yR_req">
       <td>REQ_SC_1_1</td>
       <td>We ramp down the ultrasound sensor signal value.</td>
       <td>1</td>
       <td><div class="ok-glyph ok-color">Covered</div></td>
       <td>100%</td>
       <td> 100%</td>
    </tr>
</table>
<h2>Requirements Traceability</h2>
<table class="data_table">
    <tr>
       <th>Requirement ID</th>
       <th>Description</th>
       <th>Test Cases</th>
       <th>Covered</th>
       <th>Passed</th>
    </tr>
    <tr id="2yV_req">
       <td>REQ_SC_2_1</td>
       <td>IStart with a zero input signal for usSensorFront. Drive revers by setting remoteCtrlCmd.</td>
       <td>1</td>
       <td class="cell_thin_center"><div class="result_container covered_center"><div class="NOT_COVERED cell_thin_center" /></div></td>
       <td>0%</td>
     </tr>
</table>
  • In version 2.11, the requirement information is located in the Test Execution Report file. Independently of the tool version, this information is found in the Requirements Traceability table. Since version 3.x, BTC EmbeddedTester reports include distinct columns that indicate the status of requirements passed for different measurements. The status is represented as 0% if it failed and 100% if it passed. MQC generates these measures for each measurement based on the results and coverage of the requirements:

    • Number of 100% requirements as Requirement\Result.Passed

    • Number of 0% requirements as Requirement\Result.Failed

    • Number of 100% or 0% requirements as Requirement\Result.Total

    • Number of requirements marked as Covered in the Covered column as Requirement\Testcase Coverage.Covered

    • Number of requirements marked as Not covered in the Covered column as Requirement\Testcase Coverage.Uncovered

    • Number of requirements as Requirement\Testcase Coverage.Total

    For each row and measurement, MQC reads two data details.

    For the first Data Detail, MQC reads:

    • Requirement as Subject

    • EmbeddedTester/{MeasurementName}/Requirement/{Requirement ID} as Subject Path

    • ArtifactPath as Artifact Inner Path

    • Result as Measure

    • Passed (if the Passed column is 100%) or Failed (if it is 0%) as State

    For the second Data Detail, MQC reads:

    • Requirement as Subject

    • EmbeddedTester/{MeasurementName}/Requirement/{Requirement ID} as Subject Path

    • ArtifactPath as Artifact Inner Path

    • Testcase Coverage as Measure

    • Covered (if the Covered column is Covered), otherwise Uncovered as State

<table>
  <tr>
    <th></th>
    <th>Properties</th>
    <th colspan="2">Handled</th>
    <th colspan="2">Covered</th>
    <th colspan="2">Unreachable (n/inf)</th>
    <th colspan="2">Justified</th>
  </tr>
  <tr>
    <td>Statement (STM)</td>
    <td>302</td>
    <td>90</td>
    <td>29.8%</td>
    <td>90</td>
    <td>29.8%</td>
    <td>0</td>
    <td>0.0%</td>
    <td>0</td>
    <td>0.0%</td>
  </tr>
</table>
  • In version 2.11, the reference file for coverage data is determined by the measurement read from the Test Execution Report file. If the measurement name is SiL (Software-in-Loop), the coverage data is read from the tables in the second section of the Code Analysis Report. Otherwise, for MiL (Model-in-Loop), the coverage data is read from the tables in the Coverage Statistics section of the Model Coverage Report. For version 3.x and newer, SiL coverage data is retrieved from all tables under the Code Analysis section. Independently of the version, the first column of these tables is interpreted as the measure name, while the subsequent columns are interpreted as variables:

    • Handled

    • Covered

    • Unreachable

    • Justified

<h2>Back-to-Back Testing</h2>
<div><h3>1. Back-to-Back Test</h3>
  <h3>Test Results Overview</h3>
  <h3 >TL MIL vs SIL</h3>
  <div>
     <h4>Comparison Results</h4>
     <table class="table">
         <tr>
           <th></th>
           <th>Name</th>
           <th>Length</th>
           <th>Result</th>
         </tr>
         <tr>
           <td>#1</td>
           <td>
               <span class="icon-wdc" title="Warnings during comparison"></span>:
               TL MIL_Stage1_TempValue <br>
               <span class="icon-sdc" title="No problems during comparison"></span>:
               SIL_Stage1_TempValue
           </td>
           <td> 1</td>
           <td><div class="ok-glyph ok-color"> Passed</div> </td>
         </tr>
     </table>
  </div>
</div>
  • For version 3.x and newer, Back-to-Back data is read from the Back-to-Back Testing table. If the file is a Back-to-Back Test Report, the data is retrieved from the table in the section following the Scopes Overview. MQC interprets these measures for the B2B measurement based on the values in the ‘Result’ column:

    • Number of Passed test cases as Testcase\Result.Passed

    • Number of Failed test cases as Testcase\Result.Failed

    • Number of test cases as Testcase\Result.Total

    For each row, MQC reads one data detail:

    • Testcase as Subject

    • EmbeddedTester/B2B/Testcase/{Name} as Subject Path

    • ArtifactPath as Artifact Inner Path

    • Result as Measure

    • Passed | Failed | Error | No Verdict (based on the value in the Result column) as State

4.6.1.8. Verifysoft Testwell CTC++

MQC supports two types of Verifysoft Testwell CTC++ report formats:

4.6.1.8.1. XML

MQC will read from the XML file:

<ctc_xml_report>
  <header_info>
    <ctcpost_version>8.0.1</ctcpost_version>
    <copyright>Copyright (c) 1993-2013 Testwell Oy</copyright>
    <copyright>Copyright (c) 2013-2016 Verifysoft Technology GmbH</copyright>
    <report_generated>Fri Jun 05 15:22:09 2020</report_generated>
  </header_info>
  <file name="D:\source\Application\DeviceLayer\test_object.cpp">
    <file_type>source</file_type>
    <instrumentation_mode>multicondition</instrumentation_mode>
    <instrumentation_timestamp>Fri Jun 05 15:21:42 2020</instrumentation_timestamp>
    <sym_rewrite_count>0</sym_rewrite_count>
    <sym_update_count>0</sym_update_count>
    <data_rewrite_count>0</data_rewrite_count>
    <data_update_count>0</data_update_count>
    <file_summary>
      <functions>77</functions>
      <lines>200</lines>
      <measurement_points>16</measurement_points>
      <ter>68</ter>
      <hits>343</hits>
      <all>505</all>
      <statement_ter>92</statement_ter>
      <statement_hits>5310</statement_hits>
      <statement_all>5752</statement_all>
      <statement_na_functions>0</statement_na_functions>
    </file_summary>
  </file>
</ctc_xml_report>

MQC extracts the following information and transforms it into the MQC data structure (see the sketch after this list):

  • from the <header_info> header:

    • <report_generated> (stored in MQC as ReportDateTime)

  • from each <file> header:

    • name (stored in MQC as ArtifactName)

    • from the <file_summary> header:

      • <lines> (stored in MQC as Code Coverage\{Source Line}.Total)

      • <measurement_points> (stored in MQC as Code Coverage\{Measurement Point}.Total)

      • when both <xxx_hits> and <xxx_all> exist

        • <xxx_hits> (stored in MQC as Code Coverage\{xxx}.Covered)

        • <xxx_all> (stored in MQC as Code Coverage\{xxx}.Total)

        • <xxx_all> - <xxx_hits> (stored in MQC as Code Coverage\{xxx}.Uncovered)

        • The capitalised form of xxx is used in MQC.
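
The rules above can be sketched as follows. This is an illustrative sketch, not MQC's file reader; the file name is hypothetical and only prefixed <xxx_hits>/<xxx_all> pairs are handled:

import xml.etree.ElementTree as ET

root = ET.parse("ctc_report.xml").getroot()
report_datetime = root.findtext("header_info/report_generated")

artifacts = {}
for file_el in root.iter("file"):
    summary = file_el.find("file_summary")
    measures = {
        "Code Coverage\\Source Line.Total": int(summary.findtext("lines")),
        "Code Coverage\\Measurement Point.Total": int(summary.findtext("measurement_points")),
    }
    for child in summary:
        if not child.tag.endswith("_hits"):
            continue
        name = child.tag[:-len("_hits")]                 # e.g. "statement"
        all_el = summary.find(f"{name}_all")
        if all_el is None:
            continue
        hits, total = int(child.text), int(all_el.text)
        label = name.capitalize()                        # statement -> Statement
        measures[f"Code Coverage\\{label}.Covered"] = hits
        measures[f"Code Coverage\\{label}.Total"] = total
        measures[f"Code Coverage\\{label}.Uncovered"] = total - hits
    artifacts[file_el.get("name")] = measures            # name attribute -> ArtifactName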

4.6.1.8.2. HTML

MQC will read from the HTML file:

<html>
  <head>
    <title>CTC++ Coverage Report - Files Summary</title>
  </head>
  <body>
  <table>
    <tr><td class="info">Symbol file(s)</td><td class="info">:</td><td class="info">
     c:\temp\Autopilot_ec\Test\Test_autopilot_demo_ec\Autopilot_Mode_Logic\Test001\TSeq001\CTCCov\Data_sil_ec\
     MON.sym (Thu May 21 05:35:10 2020)</td></tr>
    <tr><td class="info">Listing produced at</td><td class="info">:</td><td class="info">
     Thu May 21 05:44:18 2020</td></tr>
    <tr><td class="info">Coverage view</td><td class="info">:</td><td class="info">
     Reduced to decision coverage</td></tr>
    <tr><td class="info">Input listing</td><td class="info">:</td><td class="info">
     D:\temp\Autopilot_ec\Test\Test_autopilot_demo_ec\Autopilot_Mode_Logic\CTCCov\Data_sil_ec\profile.txt</td></tr>
    <tr><td class="info">HTML generated at</td><td class="info">:</td><td class="info">
     Thu May 21 05:44:18 2020</td></tr>
    <tr><td class="info">Structural threshold</td><td class="info">:</td><td class="infob">100 %</td></tr>
    <tr><td class="info">Statement threshold</td><td class="info">:</td><td class="infob">100 %</td></tr>
  </table><br>
  <table>
    <thead>
      <tr><th>TER %</th><th>-</th><th colspan="2">decision</th><th>TER %</th><th>-</th>
       <th colspan="2">statement</th><th>File</th></tr>
    </thead>
    <tbody>
      <tr><td class="dirb" colspan="9"><a name="a1"></a>Directory: C:\temp\Autopilot_ec\Test\
       Test_autopilot_demo_ec\Autopilot_Mode_Logic\Autopilot_Mode_Logic_sil_sil_ec_ert_rtw</td></tr>
      <tr><td class="below">98 %</td><td class="below">-</td><td class="below">(41/42)</td>
       <td width="115"><img border="1" src="red.gif" width="98" height="5" alt="">
        <img border="1" src="white.gif" width="2" height="5" alt="">
       </td><td class="below">99 %</td><td class="below">-</td><td class="below">(86/87)</td>
       <td width="115"><img border="1" src="red.gif" width="99" height="5" alt="">
        <img border="1" src="white.gif" width="1" height="5" alt="">
       </td><td><a href="indexD1.html" class="underline">Autopilot_Mode_Logic_sil_sil_ec.c</a></td></tr>
      <tr><td class="above">100 %</td><td class="above"></td><td class="above">(0/0)</td>
       <td width="115"><img border="1" src="blue.gif" width="100" height="5" alt=""></td>
       <td class="above">100 %</td><td class="above"></td><td class="above">(0/0)</td>
       <td width="115"><img border="1" src="blue.gif" width="100" height="5" alt=""></td>
       <td><a href="indexD2.html" class="underline">Autopilot_Mode_Logic_sil_sil_ec_data.c</a></td></tr>
      <tr><td class="belowb">98 %</td><td class="below">-</td><td class="below">(41/42)</td>
       <td width="115"><img border="1" src="red.gif" width="98" height="5" alt="">
        <img border="1" src="white.gif" width="2" height="5" alt=""></td>
       <td class="belowb">99 %</td><td class="below">-</td><td class="below">(86/87)</td>
       <td width="115"><img border="1" src="red.gif" width="99" height="5" alt="">
        <img border="1" src="white.gif" width="1" height="5" alt=""></td>
       <td class="dirb"><a title="C:\temp\R2018B\Autopilot_ec\Test\Test_autopilot_demo_ec\Autopilot_Mode_Logic\
        Autopilot_Mode_Logic_sil_sil_ec_ert_rtw">DIRECTORY OVERALL</a></td></tr>
      <tr><td class="ruler" colspan="9">&nbsp;</td></tr>
      <tr><td class="belowb">98 %</td><td class="below">-</td><td class="below">(41/42)</td><td width="115">
        <img border="1" src="red.gif" width="98" height="5" alt="">
        <img border="1" src="white.gif" width="2" height="5" alt=""></td>
       <td class="belowb">99 %</td><td class="below">-</td><td class="below">(86/87)</td>
       <td width="115"><img border="1" src="red.gif" width="99" height="5" alt="">
        <img border="1" src="white.gif" width="1" height="5" alt=""></td>
       <td><a href="indexO.html" class="underlineb">OVERALL</a></td></tr>
    </tbody>
  </table><br>
 </body>
</html>

MQC extracts the following information and transforms it into the MQC data structure:

  • from the first <table> header:

    • read the last <td> of the <tr> with the first <td> equal to ‘Listing produced at’ (stored in MQC as ReportDateTime)

  • from the second <table> header:

    • from <thead>: starting with the third <th>, every third <th> is read, capitalised and prefixed with Code Coverage\ to form the MeasureName.

    • from each <tr>:

      • read the last <td> as ArtifactName

      • from the <td> corresponding to each <th>, read the values for that measure: the first number is fetched as Covered, the second as Total, and the difference between the two as the Uncovered variable (e.g. for the second row in the sample report above, (41/42) is read as Code Coverage\Decision.Covered = 41, Code Coverage\Decision.Total = 42 and Code Coverage\Decision.Uncovered = 1)

When reading rows from the second table, MQC ignores any <tr> that has just one <td>, that has class="dirb" or class="ruler", or whose last <td> equals "DIRECTORY OVERALL" or "OVERALL".
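
The two parsing rules above (measure names from the header row, covered/total/uncovered values from the "(covered/total)" cells) can be sketched as follows; the header list is copied from the example above, and the sketch is not MQC's implementation:

import re

# <th> texts of the header row in the example above
header_cells = ["TER %", "-", "decision", "TER %", "-", "statement", "File"]
measure_names = [f"Code Coverage\\{name.capitalize()}"
                 for name in header_cells[2::3]          # third <th>, then every third
                 if name != "File"]
# -> ['Code Coverage\\Decision', 'Code Coverage\\Statement']

def parse_ratio(cell_text):
    """Split a '(covered/total)' cell into (Covered, Total, Uncovered)."""
    covered, total = map(int, re.match(r"\((\d+)/(\d+)\)", cell_text.strip()).groups())
    return covered, total, total - covered

parse_ratio("(41/42)")   # -> (41, 42, 1)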

4.6.1.9. Danlawinc MxSuite

MQC reads data from the XML file named Report.RegResults.xml:

<MxVDevReportFile>
  <Report>
    <Date>20200423</Date>
    <Time>15:39</Time>
    <ProgramVersion>3.41.1.45984</ProgramVersion>
  </Report>
  <Regression>
    <Overview>
      <Project>
        <Name>ModelTestMiL</Name>
        <Description>
        </Description>
        <Folder>C:\ModelTest\ModelTestMiL</Folder>
        <FileName>ModelTestMiL.mxp</FileName>
        <ScenarioFolder>.\TestCases\</ScenarioFolder>
      </Project>
    </Overview>
    <StatisticsTotals Total="1" PercentTotal="100" Passed="0" PercentPassed="0" Failed="1" PercentFailed="100"
                      Skipped="0" PercentSkipped="0" RunTimeError="0" PercentRunTimeError="0"
                      Missing="0" PercentMissing="0">
      <Testcases Total="1" PercentTotal="100" Passed="0" PercentPassed="0" Failed="1" PercentFailed="100"
                 Skipped="0" PercentSkipped="0" RunTimeError="0" PercentRunTimeError="0"
                 Missing="0" PercentMissing="0" />
    </StatisticsTotals>
  </Regression>
</MxVDevReportFile>

MQC reads out

  • from the <Report>

    • <Date> and <Time> (stored in MQC as ReportDateTime)

  • from each <Regression> header:

    • <Overview><Project><Name> (stored in MQC as ArtifactName)

    • from <StatisticsTotals> header:

      • Total (stored in MQC as Scenarios\Result.Total)

      • Passed (stored in MQC as Scenarios\Result.Passed)

      • Failed (stored in MQC as Scenarios\Result.Failed)

      • Skipped (stored in MQC as Scenarios\Result.Unexecuted)

      • RunTimeError (stored in MQC as Scenarios\Result.Error)

      • Missing (stored in MQC as Scenarios\Result.Missing)

    • from <Testcases> header:

      • Total (stored in MQC as Testcases\Result.Total)

      • Passed (stored in MQC as Testcases\Result.Passed)

      • Failed (stored in MQC as Testcases\Result.Failed)

      • Skipped (stored in MQC as Testcases\Result.Unexecuted)

      • RunTimeError (stored in MQC as Testcases\Result.Error)

      • Missing (stored in MQC as Testcases\Result.Missing)

4.6.1.12. Perforce Helix QAC

MQC supports two types of Helix QAC report formats:

4.6.1.12.1. XML

The QAC XML file reader reads rules as Guidelines and rule violations as Findings. In this way the static analysis measures are named similarly for the different static analysis tools (e.g. MXAM, SL Check).

MQC will read from the XML file:

<AnalysisData timestamp="20200423T162554" projectpath="C:/Users/public/AppData/Local/samples/Examples"
              reportpath="C:/Users/public/AppData/Local/samples/Examples/configs/reports">
  <dataroot type="project">
    <tree type = "rules" >
      <RuleGroup name="xxx" total="5212" active="1112" >
        <Rule id = "" total="5" active="2" text="" >
          <Rule id = "" total="2" active="2" text="" >
            <Message guid = "" total="2" active="2" text="" />
          </Rule>
        </Rule>
        <Rule id = "" total="3" active="0" text="" >
          <Rule id = "" total="3" active="0" text="" >
            <Message guid = "" total="1" active="0" text="" />
            <Message guid = "" total="2" active="0" text="" />
          </Rule>
        </Rule>
      </RuleGroup>
    </tree>
  </dataroot>
  <dataroot type="per-file">
    <File path=".../example.h">
      <tree type="rules">
        <RuleGroup name = "xxx" total="84" active="53" >
          <Rule id="xxx" total="29" active="23"
                text="There shall be no occurrence of undefined or critical unspecified behaviour" >
            <Message guid = "qac-9.3.1-0602" total="6" active="0" text="" />
            <Message guid = "qac-9.3.1-0603" total="12" active="12" text="" />
            <Message guid = "qac-9.3.1-0836" total="1" active="1" text="" />
            <Message guid = "qac-9.3.1-0848" total="5" active="5" text="" />
            <Message guid = "qac-9.3.1-0854" total="5" active="5" text="" />
          </Rule>
        </RuleGroup>
      </tree>
      <tree type="levels">
        <Level guid = "QA_WARNING" total="55" active="30" name="Warnings" ></Level>
        <Level guid = "QA_ERROR" total="2" active="2" name="Errors" ></Level>
      </tree>
    </File>
  </dataroot>
</AnalysisData>

MQC extracts the following information and transforms it into the MQC data structure:

  • from the <AnalysisData> header:

    • ReportDateTime: read from attribute timestamp

Each <File> element inside <dataroot type="per-file"> is treated as a separate artifact. For each artifact, Guidelines and Findings are read:

MQC treats different rule sets (<RuleGroup name="M3CM" ...>) as measurements; the name of the rule group, e.g. M3CM, is used as the measurement name in MQC. All guidelines and findings are counted separately per measurement.

  • from each <File> header:

    • ArtifactPath: read from attribute path

  • per measurement (rule group):

    • Findings.Failed: the number of active rule violations (active > 0)

    • Findings.Suppressed: value of attribute total - active (number of suppressed rule violations)

    • Guidelines.Suppressed: the number of Rule elements with Message elements (lowest level) with attribute active = "0"

    • Guidelines.Failed: the number of all Rule elements (per artifact) with Message elements, minus the number of suppressed guidelines (Guidelines.Suppressed).

  • from the <dataroot type="project"> <tree type = "rules" > element

    • Guidelines.Total: the number of Rule elements with Message elements (lowest level)

    • Guidelines.Passed: Guidelines.Total (for all artifacts) - Guidelines.Failed (per artifact)

The number of passed guidelines includes the number of suppressed guidelines (all rules without and with suppressed violations)!

4.6.1.12.2. HTML

MQC will read from the HTML file:

<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  </head>
  <body>
    <div id = "head" >
      <div class="stitle">
       Project &nbsp;&nbsp;: C:/Users/public/AppData/Local/Examples<br/>
       Status at: 23 Apr, 2019 at 16:25:13
      </div>
    </div>
    <div class="head">
      <div class="summary">
        <table border="0">
          <col border="50"/>
          <tr><td>Number of Files</td><td>185</td></tr>
          <tr><td>Lines of Code (source files only)</td><td>16431</td></tr>
          <tr><td>Total preprocessed code line</td><td>3036</td></tr>
          <tr><td>Diagnostic Count</td><td>948</td></tr>
          <tr><td>Rule Violation Count</td><td>1212</td></tr>
          <tr><td>Violated Rules</td><td>183</td></tr>
          <tr><td>Compliant Rules</td><td>7</td></tr>
          <tr><td>File Compliance Index</td><td>97.41%</td></tr>
          <tr><td>Project Compliance Index</td><td>6.68%</td></tr>
        </table>
      </div>
    </div>
   <div id="content">
     <div class="dpp">
        <div class="subsec"><h5>M3CM</h5></div>
        <div class="rgtable">
          <table border="1" >
            <tr><th>Files</th><th>Rule 0</th><th>Rule 1</th><td><b>Total Violations</b></td></tr>
            <tr>
             <td><a href="example.h" title="example.h">example.h</a></td>
             <th>2</th><th>3</th><td><b>5</b></td>
            </tr>
          </table>
        </div>
        <div class="subsec"><h5>QA-C</h5></div>
        <div class="rgtable">
          <table border="1" >
            <tr><th>Files</th><th>Rule 0</th><th>Rule 1</th><td><b>Total Violations</b></td></tr>
            <tr>
             <td><a href="example.h" title="example.h">example.h</a></td>
             <th>1</th><th>5</th><td><b>6</b></td>
            </tr>
          </table>
        </div>
    </div>
     <div class="worstrules">
        <div class="rgtable">
          <table border="1" >
            <tr><th>Files</th><th>Rule 0</th><th>Rule 1</th><th>Rule n</th></tr>
            <tr>
             <td><a href="example.h" title="example.h">example.h</a></td>
             <th>0</th><th>65</th><td>6</td>
            </tr>
          </table>
        </div>
        <div class="rgtable">
          <table border="1" >
            <tr><th>Files</th><th>Rule 0</th><th>Rule 1</th><th>Rule n</th></tr>
            <tr>
             <td><a href="example.h" title="example.h">example.h</a></td>
             <th>13</th><th>105</th><td>0</td>
            </tr>
          </table>
        </div>
     </div>
   </div>
  </body>
</html>

MQC extracts the following information and transforms it into the MQC data structure:

  • from <div class="stitle"> that contains ‘Status at:’ read date and time and stored in MQC as ReportDateTime

  • from <div id="content"><div class="dpp">:

    • from <div class="subsec"> read <h5> as MeasurementName.

      MQC treats different rule sets as measurements (the name of the table is stored as the measurement name in MQC). All guidelines and findings are counted separately per measurement.

    • from <div class="rgtable"><table> for each <tr>:

      • ArtifactName: read from first <td> of <tr>

      • Findings.Failed: from last <td> of <tr>.

  • from <div id="content"><div class="worstrules"> header:

    • from <div class="subsec"> read <h5> as MeasurementName

    • from <div class="rgtable"><table> for each <tr>:

      • Guidelines.Failed: count each <td> of <tr> with violation (value > 0)

      • Guidelines.Passed: number of all guidelines for current rule - Guidelines.Failed

    • from <table> inside <div class="summary">:

      • Guidelines.Overall Failed: read the second <td> of the <tr> whose first <td> equals ‘Violated Rules’

      • Guidelines.Overall Passed: read the second <td> of the <tr> whose first <td> equals ‘Compliant Rules’

      • Guidelines.Overall Total: sum of the Violated Rules and Compliant Rules values

4.6.1.14. Rational Test RealTime (RTRT)

MQC supports Rational Test RealTime reports in HTML format.

From the index.htm file, MQC reads the Generated on date as ReportDateTime.

<table WIDTH="100%" BORDER="2" >
   <tr><td>Name </td ><td>Status</td ><td>Failed</td ><td>Passed</td ><td>Total</td ></tr>
   <tr><td>ArtifactPath.xrd</td><td>Passed</td><td>0 </td ><td>28</td ><td>28</td ></tr>
</table>

MQC extracts the Testcase\Result measure information from the index.html file in the Reporter directory and transforms it into the MQC data structure:

  • Name column stored in MQC as ArtifactPath

  • Failed column stored in MQC as Testcase\Result.Failed variable

  • Passed column stored in MQC as Testcase\Result.Passed variable

  • Total column stored in MQC as Testcase\Result.Total variable

Any extension (including ‘_’ and numbers) at the end of the artifact path will be removed if there is at least one other row with the same base name, e.g. ‘Artifact’, ‘Artifact1’ and ‘Artifact_1’ will all be reduced to ‘Artifact’. In that case, the read measure values will be summed up per variable name.
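
A minimal sketch of this reduction and summation (not MQC's implementation; the extension removal mentioned above is omitted here for brevity):

import re
from collections import defaultdict

rows = [
    ("Artifact",   {"Testcase\\Result.Passed": 28, "Testcase\\Result.Total": 28}),
    ("Artifact_1", {"Testcase\\Result.Passed": 10, "Testcase\\Result.Total": 11}),
]

def base_name(name):
    # drop a trailing '_'/digit suffix, e.g. 'Artifact_1' or 'Artifact1' -> 'Artifact'
    return re.sub(r"[_0-9]+$", "", name)

base_names = [base_name(name) for name, _ in rows]
summed = defaultdict(lambda: defaultdict(int))
for name, values in rows:
    # reduce only if at least one other row shares the same base name
    key = base_name(name) if base_names.count(base_name(name)) > 1 else name
    for variable, value in values.items():
        summed[key][variable] += value
# summed["Artifact"] == {"Testcase\\Result.Passed": 38, "Testcase\\Result.Total": 39}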

<table>
 <tr>
  <td><b>Item</b></td>
  <td><b>Functions</b></td>
  <td><b>Functions and exits</b></td>
  <td><b>Statement blocks</b></td>
  <td><b>Decisions</b></td>
  <td><b>Basic conditions</b></td>
  <td><b>Modified conditions</b></td>
  <td><b>Multiple conditions</b></td>
 </tr>
 <tr>
  <td><B>ArtifactPath.C</B></td>
  <td> 10 / 10 </td>
  <td> 35 / 35 </td>
  <td> 35 / 36 </td>
  <td> 40 / 46 </td>
  <td> N/A </td>
  <td> N/A </td>
 </tr>
</table>

MQC extracts the coverage information from the last table of the RateDoc.html file in the Cvi directory and transforms it into the MQC data structure:

  • Item column stored in MQC as ArtifactPath

  • all other columns are measures with the column header as base measure name (by capitalizing the words, making the last word singular and prefixing it with Code Coverage\):

    • read number before / stored in MQC as Covered variable.

    • read number after / stored in MQC as Total variable.

    • read the difference between these two numbers stored in MQC as Uncovered variable.

For the sample data above, the extracted data is:

  • Code Coverage\Function.Covered=10, Code Coverage\Function.Total=10, Code Coverage\Function.Uncovered=0

  • Code Coverage\Functions And Exit.Covered=35, Code Coverage\Functions And Exit.Total=35, Code Coverage\Functions And Exit.Uncovered=0

  • Code Coverage\Statement Block.Covered=35, Code Coverage\Statement Block.Total=36, Code Coverage\Statement Block.Uncovered=1

  • Code Coverage\Decision.Covered=40, Code Coverage\Decision.Total=46, Code Coverage\Decision.Uncovered=6

MQC keeps the extension of the read artifact path (‘.h’ / ‘.c’), but removes any trailing ‘#’ and numbers from the end of the name, e.g. ‘Artifact.c’ and ‘Artifact.c #1’ will both be reduced to ‘Artifact.c’. In that case, the read measure values will be summed up per variable name.

4.6.1.16. JetBrains dotCover

MQC supports JetBrains dotCover code coverage reports in XML format.

<FileIndices>
   <File Index="143" Name="C:\mqc\MES.MQC\MES.MQC.ConfigurationLibrary\Models\DisplayOptions.cs"
   ChecksumAlgorithm="SHA256" Checksum="E053E658FA2275AA60686980FC1640C5B6134A6F6FE4610494DB5DA184BF39C1" />
</FileIndices>
<Assembly Name="MES.MQC.ConfigurationLibrary" CoveredStatements="3452" TotalStatements="4616" CoveragePercent="75">
    <Namespace Name="MES.MQC.ConfigurationLibrary.Models" CoveredStatements="6" TotalStatements="6" CoveragePercent="100">
      <Type Name="DisplayOptions" CoveredStatements="5" TotalStatements="6" CoveragePercent="83">
        <Method Name=".ctor():System.Void" CoveredStatements="6" TotalStatements="6" CoveragePercent="100">
          <Statement FileIndex="143" Line="13" Column="13" EndLine="13" EndColumn="45" Covered="True" />
          <Statement FileIndex="143" Line="16" Column="83" EndLine="16" EndColumn="115" Covered="True" />
          <Statement FileIndex="143" Line="19" Column="77" EndLine="19" EndColumn="109" Covered="False" />
          <Statement FileIndex="143" Line="21" Column="86" EndLine="21" EndColumn="88" Covered="True" />
          <Statement FileIndex="143" Line="24" Column="75" EndLine="24" EndColumn="111" Covered="True" />
          <Statement FileIndex="143" Line="28" Column="15" EndLine="28" EndColumn="54" Covered="True" />
        </Method>
      </Type>
    </Namespace>
</Assembly>

The file reader provides two possibilities for reading dotCover reports, configurable via the DotCoverArtifactType file reader option. This configuration can be set in the same way as other general File Reader Options:

Listing 4.21 Example of file reader options for DotCover
 AdapterOptions:
 - $type: MES.MQC.DataSourceLibrary.Adapters.Files.DotCoverXmlFileReader+FileReaderOptions
   DotCoverArtifactType: Class
   DotCoverFindingType: Method
  • File-based reading (the default, or when DotCoverArtifactType = File):

    All Statement elements with the same FileIndex are aggregated and the coverage measures are extracted as follows:

    • Code Coverage\Statement.Covered: count of <Statement Covered="True"> elements

    • Code Coverage\Statement.Uncovered: count of <Statement Covered="False"> elements

    • Code Coverage\Statement.Total: total number of <Statement> elements

    The ArtifactName is taken from the Name attribute of the corresponding File element with the matching index.

  • Class-based reading (DotCoverArtifactType = Class)

    From each Type element, read:

    • CoveredStatements attribute as Code Coverage\Statement.Covered

    • TotalStatements attribute as Code Coverage\Statement.Total

    • TotalStatements - CoveredStatements as Code Coverage\Statement.Uncovered

    The ArtifactName is constructed as {Namespace Name}.{Type Name}, i.e. the Name attribute of the parent Namespace element joined by a dot with the Name attribute of the Type element. A sketch of both reading modes follows.
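
This is an illustrative sketch, not MQC's file reader; the file name is hypothetical and the report is assumed to have the structure shown in the excerpt above, with a single document root.

import xml.etree.ElementTree as ET
from collections import Counter

root = ET.parse("dotcover_report.xml").getroot()
file_names = {f.get("Index"): f.get("Name") for f in root.iter("File")}

def read_file_based(root):
    """Aggregate <Statement> elements per FileIndex (DotCoverArtifactType = File)."""
    covered, total = Counter(), Counter()
    for stmt in root.iter("Statement"):
        idx = stmt.get("FileIndex")
        total[idx] += 1
        if stmt.get("Covered") == "True":
            covered[idx] += 1
    return {
        file_names[idx]: {
            "Code Coverage\\Statement.Covered": covered[idx],
            "Code Coverage\\Statement.Uncovered": total[idx] - covered[idx],
            "Code Coverage\\Statement.Total": total[idx],
        }
        for idx in total
    }

def read_class_based(root):
    """Use the statement counters of each <Type> (DotCoverArtifactType = Class)."""
    result = {}
    for ns in root.iter("Namespace"):
        for typ in ns.iter("Type"):
            covered = int(typ.get("CoveredStatements"))
            total = int(typ.get("TotalStatements"))
            result[f"{ns.get('Name')}.{typ.get('Name')}"] = {
                "Code Coverage\\Statement.Covered": covered,
                "Code Coverage\\Statement.Uncovered": total - covered,
                "Code Coverage\\Statement.Total": total,
            }
    return result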

MQC extracts Data Details at multiple granularity levels based on the DotCoverFindingType defined in the file reader options. The supported levels are:

  • Statement: Each Statement element is extracted as an individual Data Detail.

  • Method: For each Method element, MQC generates one or two Data Details by aggregating all Statement elements with the same Covered attribute value.

  • Class: For each Type element, MQC aggregates Statement elements by Covered attribute and generates one or two Data Details accordingly.

  • File: For each file, MQC similarly aggregates Statement elements that belong to the file and generates the corresponding Data Details.

At each level, MQC reads the following Data Detail:

  • Statement Coverage as Subject

  • CodeTest\Code Coverage\Statement Coverage as Subject Path

  • {ArtifactPath}\TypeName(s)\MethodName as Artifact Inner Path

    • TypeName(s) are included if DotCoverArtifactType = File

    • MethodName is included if DotCoverFindingType is Method or Statement

  • Code Coverage as Measure

  • Covered if Covered="True", otherwise Uncovered as State

4.6.1.17. JUnit

MQC supports JUnit test result reports in XML format.

<testsuites tests="593" failures="0" errors="0" skipped="7" time="697.472906" timestamp="2025-05-22 13:48:52Z">
  <testsuite tests="4" failures="0" errors="0" skipped="1" time="0.180289" timestamp="2025-05-22T13:49:31.9063443Z" name="TestSuiteName" file="TestObjectFilePath">
    <properties>
      <property name="testOf" value="TestObjectClassName" />
      <property name="category" value="TestCategory" />
    </properties>
    <testcase name="TestCaseName" classname="TestCaseClassName" assertions="9" time="0.090707">
      <properties>
        <property name="author" value="TestCaseAuthor" />
      </properties>
    </testcase>
    <testcase name="TestCaseName" classname="TestCaseClassName" file="TestObjectFilePath" assertions="9" time="0.089311">
      <properties>
        <property name="author" value="TestCaseAuthor" />
        <property name="description">
          Testcase description
        </property>
        <property name="severity" value="Major /">
      </properties>
    </testcase>
    <testcase name="TestCaseName" classname="TestCaseClassName" assertions="6" time="13.696713">
      <properties>
        <property name="author" value="TestCaseAuthor" />
      </properties>
      <system-out>Warning</system-out>
      <system-err>Detailed description of what went wrong</system-err>
    </testcase>
    <testcase name="TestCaseName" classname="TestCaseClassName" assertions="0" time="0.050186">
      <properties>
        <property name="author" value="TestCaseAuthor" />
      </properties>
      <skipped message="Obsolete because of ..." />
    </testcase>
  </testsuite>
  <testsuite tests="3" failures="2" errors="0" skipped="0" time="0.180289" timestamp="2025-05-22T13:49:32.9033468Z" name="TestSuiteName" file="TestObjectFilePath">
  ...
</testsuites>

In general, MQC takes the object of a testcase, i.e. an implementation class file, as the artifact and extracts JUnit data by counting all testcase results related to this artifact.

Therefore, to extract the ArtifactPath, MQC tries to find the name of a test object in one of the following locations in the given order:

  • file attribute of a <testcase> element or

  • file attribute of the <testsuite> element, the testcase belongs to, or

  • value attribute of a <property name="testOf"> element in a <testcase> element or

  • value attribute of a <property name="testOf"> element in a <testsuite> element, the testcase belongs to

For each artifact/test object, MQC counts the results of all testcases related to that artifact. The MeasureName is set to Testcase\Result, and MQC expects the following result types, which are then used as VariableName (see the counting sketch after this list):

  • Failed: the number of testcases containing a subelement <failure>

  • Warning: the number of testcases containing a subelement <system-out>Warning</system-out>

  • Error: the number of testcases containing a subelement <error>

  • Unexecuted: the number of testcases containing a subelement <skipped>

  • Passed: the number of testcases, which do not contain any of the previous subelements

  • Total: the number of all testcases related to a particular test object
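
A minimal counting sketch (not MQC's file reader), assuming the XML structure of the example above; the file name is hypothetical, and the test object is resolved in the order listed earlier:

import xml.etree.ElementTree as ET
from collections import Counter, defaultdict

root = ET.parse("junit_report.xml").getroot()
results = defaultdict(Counter)                  # artifact -> counts per result type

def artifact_of(testcase, testsuite):
    """Resolve the test object in the documented priority order."""
    candidates = (
        testcase.get("file"),
        testsuite.get("file"),
        next((p.get("value") for p in testcase.iter("property")
              if p.get("name") == "testOf"), None),
        next((p.get("value") for p in testsuite.findall("properties/property")
              if p.get("name") == "testOf"), None),
    )
    return next((c for c in candidates if c), None)

def result_of(testcase):
    if testcase.find("failure") is not None:
        return "Failed"
    if (testcase.findtext("system-out") or "").strip() == "Warning":
        return "Warning"
    if testcase.find("error") is not None:
        return "Error"
    if testcase.find("skipped") is not None:
        return "Unexecuted"
    return "Passed"

for testsuite in root.iter("testsuite"):
    for testcase in testsuite.iter("testcase"):
        artifact = artifact_of(testcase, testsuite)
        results[artifact][result_of(testcase)] += 1
        results[artifact]["Total"] += 1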

The DateTime is extracted per artifact. MQC reads the timestamp attribute of the testsuite that contains testcases for a particular artifact. If there are multiple related testsuites, the execution timestamp of the first testsuite is taken.

MQC reads and imports every testcase as a finding (Data Details). In particular, the following information is extracted:

  • SubjectType: Testcase

  • Measure: Result

  • DateTime: see date time extracted for data

  • DataSourceName: see data source name set for data

  • MeasurementName: CodeTest

  • ArtifactPath: test object of testcase, see artifact path extracted for data

  • SubjectName: name attribute of <testcase> element

  • SubjectPath: combination of MeasurementName, SubjectType and Measure

  • SubjectPathForArtifact: path to the unit test file, i.e. the classname attribute of the <testcase> element

  • Description: a combination of the testcase properties category, author, severity and description, if available

  • IssueMessage: value of message attribute of the corresponding testcase subelement, only relevant for not passed testcases with subelements like e.g. <failure>

  • State: the testcase result, e.g. Passed

4.6.1.18. Generic Data Sheet File Reader

MQC provides the possibility to read data and findings from any CSV or Excel data sheet. For that, the user has to define adapter options. Adapter options for the Generic Data Sheet file reader are rules that map, for instance, the columns of an Excel table to MQC data dimensions such as revisions, artifacts, and measures. Additionally, another option may specify how to filter rows.

The following code block shows an example of an adapter option to read MXAM data from an Excel sheet.

The “ImportDefinitions” define how to fetch the relevant data:

  • FileExpression: name pattern of files that could be interpreted by this adapter option

  • ArtifactName: fetched from table column [SubComponents_Path]

  • Value: fetched from table column [Count]

An Excel sheet sample that could be read with this adapter option is shown in Figure 4.79.

$schema: http://quality-commander.de/userguide/v83/schema/AdapterOptionSource.schema.json
$version: 1.0
Name: 'GenericDataSheet: MXAM Adapter Option'
AdapterOptions:
- $type: MES.MQC.DataSourceLibrary.Adapters.Files.GenericDataSheetFileReader+FileReaderOptions
  ImportDefinitions:
  - FileExpression: MXAMReport.*_[0-9]+\.xlsx
    ArtifactPath: '[SubComponents_Path]'
    ArtifactName: '[SubComponents_Path]'
    DataSource: MXAM
    MeasurementName: '[Chapters]'
    MeasureName: '[MeasureName]'
    VariableName: '[ResultType]'
    Value: '[Count]'
    ReportDateTime: MXAMReport.*_(\d{8}).xlsx
    ReportDateTimeFormat: yyyyMMdd
../../_images/MQC_DataSources_GenericDataSheet.png

Figure 4.79 Sample Excel sheet for reading MXAM data by Generic data sheet file reader

4.6.1.19. Manual data import (Excel Template)

MQC provides the possibility to import data from any other data source using the manual import option.

All measures imported into MQC have to be configured in the Quality Model (see Base Measures). Imported but not configured measures will be ignored in all visualizations as well as for the quality calculation.

The manual import has to be done using Excel. From the Data Locations dialog choose the Data Template button to create an Excel import file that can be used to load data into MQC.

../../_images/MQC_DataSources_ExportTemplateManualImport.png

Figure 4.80 Export the Data template for using manual data import functionality

Select one of the following options to customize the data import template according to your needs:

  • All: The Excel file already contains a column for each measure configured in the quality model as well as one row for each artifact imported or configured in the project structure.

  • Filtered: The Excel file only contains those measures and artifacts selected via the filter panel on the right-hand side of the pages.

  • Marked: The Excel file only contains those measures and artifacts that were marked by the user, e.g. if the user has selected an artifact structure and a specific data source in the Data Status page, the file only contains a row for the marked artifact as well as columns only for measures belonging to the marked data source.

../../_images/MQC_DataSources_ManualImport.png

Figure 4.81 Sample file for manual import of two Base Measures for two Artifacts for the same revision

The column Artifact consists of entries representing the objects for which data shall be collected and for which quality shall be computed (e.g. Simulink models, requirements documents, software components).

The entries of the column ReportDateTime are interpreted by MQC as the dates on which the data to be collected was raised. These configured report dates are used to assign the imported data to MQC revisions.

Finally, all other columns are representing measures for which data shall be imported into MQC. As shown in Figure 4.81, measures (i.e. the column names) have to follow the syntax DataSource.Measurement.BaseMeasure.Variable.

  • DataSource: Specify where your data is coming from (e.g. the name of the tool which produced the data).

  • Measurement: Provide more structuring regarding the data (e.g. the reason why data is collected, the test environment etc.). This is optional and may be left empty; in this case the measure name syntax is DataSource.BaseMeasure.Variable. Nevertheless, it is recommended to fill out Measurement to achieve high compliance with ISO 250xx.

  • BaseMeasure: Specifies a group of measures.

  • Variable: The name of the specific measure belonging to the base measure group.

The value for a specific measure for an artifact at a certain report date then has to be assigned to the corresponding cell within the Excel template.

If you want to assign default values to your imported base measures, you have to define them in compliance with this syntax in the quality model (see Base Measures).

4.6.2. File Reader Options

File Reader Options can be used to configure the way in which certain data is extracted from reports. It depends on the adapter whether such a configuration is supported and which options are possible.

In any case, a File Reader Option has to define the adapter it shall be applied to within the type parameter, as shown by the code snippet at the end of this section.

File Reader Options can be added via the adapter dialog, see Figure 4.76.

File Readers support the configuration of FilePath based Options for the following data fields:

  • ArtifactPath

  • DataSourceName

  • MeasurementName

  • ReportDateTime

The values for these fields can be extracted from the report file path with regex expressions.

In the adapter options, the data field name has to be suffixed with “FromFilePath” (e.g. “ArtifactPathFromFilePath”). You can specify multiple definitions per data field. The definitions are checked in order and the first matching definition is used.

Each definition contains:

  • Regex

    A regular expression to match the file path.

  • Result

    The resulting value from the regex. Static values or dynamic regex results like $1 can be used.

  • IsFallback (optional, default is false)

    If true, the definition is only used if no value was read by the adapter itself.

  • Format

    Only relevant for ReportDateTime: the DateTime format (e.g. “yyMMdd”).

Listing 4.22 Example of general adapter options defined for TptXmlAdapter for reading ‘ArtifactPath’, ‘MeasurementName’ and ‘ReportDateTime’ from file path
AdapterOptions:
- $type: MES.MQC.DataSourceLibrary.Adapters.Files.TptXmlFileReader+FileReaderOptions
  ArtifactPathFromFilePath:
    - Regex: ^.+\\(.+)_TPT[^\/]*\.xml$
      Result: $1
    - Regex: ^.+\\Model_([^\\]+).*$
      Result: $1
  MeasurementNameFromFilePath:
    - Regex: ^.+_(Report|Data)_([^\\]+).*$
      Result: $2
  ReportDateTimeFromFilePath:
    - Regex: ^.+\\([0-9]+)_(Report|Data)_.+$
      Format: yyMMdd
      Result: $1

4.6.3. API Connectors

The following APIs are supported by MQC with API connectors:

4.6.3.1. GitLab

MQC supports three types of GitLab API access:

The API connectors use GitLab’s v4 REST API. Authentication is done using personal, project, or group access tokens. For more information on configuring data locations with the GitLab API connectors, see GitLab API in Data Locations.

4.6.3.1.1. Repository

The GitLab Repository API Connector downloads files to be read with the File Readers, and does not create data directly. It provides similar functionality to the Git Connector. Instead of using the GIT protocol, it uses API requests to fetch commits and related files from the repository.

You can either have the API connector fetch only the changed files per commit, in which case the files are downloaded one at a time, or you can have it fetch all the files per commit. In both cases, the date, commit, and file filters allow you to limit the number of files downloaded.

4.6.3.1.2. Pipeline (Artifacts)

The GitLab Pipeline API Connector can download files to be read with the File Readers (selecting “Artifacts” as load type in GitLab API).

It then fetches the artifact files from pipeline jobs executed on branches of projects (see Job artifacts).

This works as long as the artifacts have not expired. We recommend either disabling expiration, or permanently storing some of these files somewhere else, like a git repository or an artifactory, if you want a complete history of your project.

4.6.3.1.3. Pipeline (Test Reports)

The GitLab Pipeline API Connector can read data from the test report of pipelines directly, including the total coverage of pipeline jobs (selecting “Test Reports” as load type in GitLab API).

Test reports are part of the Unit Test Reports (JUnit) feature of GitLab (see Unit test reports).

For each project branch you can freely define the artifact name.

The API connector then fetches the test report and the total coverage of each pipeline job executed on branches of projects and creates data with the artifact name configured for the data location.

4.6.3.2. Jenkins

MQC supports two types of Jenkins API access:

The API connectors use Jenkins’ Remote Access API with the JSON flavor. Authentication is done using a username and a corresponding API token. For more information on configuring data locations with the Jenkins API connectors, see Jenkins API in Data Locations.

4.6.3.2.1. Pipeline (Artifacts)

The Jenkins Pipeline API Connector can download files to be read with the File Readers (select Load Type “Artifacts” in Jenkins API).

It then fetches the artifact files from builds executed in jobs (see Recording tests and artifacts).

This works as long as the artifacts have not expired. We recommend either disabling expiration, or permanently storing some of these files somewhere else, like a git repository or an artifactory, if you want a complete history of your project.

4.6.3.2.2. Pipeline (Test Reports)

The Jenkins Pipeline API Connector can read data from the test report of builds directly (selecting “Test Reports” as load type in Jenkins API).

Test reports are part of the Test Recording feature of Jenkins (see Recording tests and artifacts).

For each job you can freely define the artifact name.

The API connector then fetches the test report and the total coverage of each build in a pipeline job and creates data with the artifact name configured for the data location.

4.6.3.3. Jira

MQC supports one type of Jira API access:

The API connectors use Jira’s v2 REST API.

Authentication is done using basic authentication with a user name and API token (see Basic auth for REST APIs).

For more information on configuring data locations with the Jira API connector, see Jira API in Data Locations.

4.6.3.3.1. Issues

The Jira Issues API Connector creates data and findings from issues directly.

The API connector fetches the issues of a project and issue type based on the configured filters, if any. Issue versions are created for all relevant changelog entries of the issue. Relevant changelog entries are those where a mapped field is changed.

If any issue has a change in a date range of the configured import granularity (e.g. a specific day from 0:00 to 23:59 for Days), the corresponding version of each issue, if it existed at that point in time, is used to create findings for that date range. Based on the findings of the date range, data is created by counting the findings with the same ArtifactPath, MeasurementName, Measure, SubjectType and State.
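
A minimal sketch of this final aggregation step; the finding field values shown are purely illustrative:

from collections import Counter

# Illustrative findings; in MQC these come from the issue changelog versions.
findings = [
    {"ArtifactPath": "ProjectA", "MeasurementName": "Issues", "Measure": "Result",
     "SubjectType": "Issue", "State": "Open"},
    {"ArtifactPath": "ProjectA", "MeasurementName": "Issues", "Measure": "Result",
     "SubjectType": "Issue", "State": "Open"},
    {"ArtifactPath": "ProjectA", "MeasurementName": "Issues", "Measure": "Result",
     "SubjectType": "Issue", "State": "Closed"},
]

key_fields = ("ArtifactPath", "MeasurementName", "Measure", "SubjectType", "State")
data = Counter(tuple(f[k] for k in key_fields) for f in findings)
# e.g. {("ProjectA", "Issues", "Result", "Issue", "Open"): 2,
#       ("ProjectA", "Issues", "Result", "Issue", "Closed"): 1}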

4.6.3.4. JFrog Artifactory

The JFrog Artifactory API Connector downloads files from JFrog Artifactory repositories. It does not extract data from the reports directly. This is done by the File Readers.

Downloading repository files includes zip archives as well. In this case, the data is extracted directly from the zip archive without unpacking the zip file.

Authentication is done using identity tokens, see Identity and Access > Access Tokens.

Because JFrog Artifactory repositories are archives rather than version-controlled storage, we recommend storing different versions of files in separate, descriptive folders, which makes the history of the archived files traceable. This is essential for showing a trend in MQC; otherwise, only the most recent set of data can be extracted from the downloaded report files.

MQC allows filtering the repository files to be downloaded (include/exclude) using regular expressions.

The background monitoring periodically checks for new repository files and notifies the user about a necessary data refresh (see client-side automatic data refresh in Keep the project up to date).

The monitoring can also check if files have been deleted from the configured repository. For this, enable the corresponding checkbox in the data location configuration. As a result, the data extracted from such report files will also be removed from the MQC project. Please note that this impacts the performance of the monitoring.

4.6.4. Custom Adapters

You can develop Custom Adapters yourself and add them in this dialog. On import, a C# class file is compiled, while a Python script is executed and validated. If an error occurs, the import fails and the error messages are shown. A custom adapter source file (.cs or .py) can contain multiple custom adapters.

../../_images/MQC_ConfigAdapters_CompileValidation.png

Figure 4.82 Compiling error messages on a failed import of a custom adapter

After a custom adapter file is imported, it is saved together with the current MQC Project as a library item or a dxp file. If the source file is changed, it has to be reloaded manually in this dialog so that it is compiled again and the adapter is updated in the MQC Project.

4.6.4.1. Developing a Custom Adapter

An MQC adapter has to inherit the MES.MQC.DataSourceLibrary.Adapters.FileReader or the MES.MQC.DataSourceLibrary.Adapters.ApiConnector class.

A custom adapter can be a .NET C# class or a Python class.

Visual Studio project to develop custom adapters for MQC:

https://github.com/Model-Engineering-Solutions/mqc-adapters