
2009-02 Real-time Monitoring and Analysis Capabilities | IRO-018-1 & TOP-010-1

Description:

Start Date: 09/24/2015
End Date: 11/09/2015

Associated Ballots:

Ballot Name | Project | Standard | Pool Open | Pool Close | Voting Start | Voting End
2009-02 Real-time Monitoring and Analysis Capabilities IRO-018-1 IN 1 ST | 2009-02 Real-time Monitoring and Analysis Capabilities | IRO-018-1 | 09/24/2015 | 10/23/2015 | 10/30/2015 | 11/09/2015
2009-02 Real-time Monitoring and Analysis Capabilities IRO-018-1 Non-binding Poll IN 1 NB | 2009-02 Real-time Monitoring and Analysis Capabilities | IRO-018-1 Non-binding Poll | 09/24/2015 | 10/23/2015 | 10/30/2015 | 11/09/2015
2009-02 Real-time Monitoring and Analysis Capabilities TOP-010-1 IN 1 ST | 2009-02 Real-time Monitoring and Analysis Capabilities | TOP-010-1 | 09/24/2015 | 10/23/2015 | 10/30/2015 | 11/09/2015
2009-02 Real-time Monitoring and Analysis Capabilities TOP-010-1 Non-binding Poll IN 1 NB | 2009-02 Real-time Monitoring and Analysis Capabilities | TOP-010-1 Non-binding Poll | 09/24/2015 | 10/23/2015 | 10/30/2015 | 11/09/2015


Hot Answers

Southern believes that the criteria in R1.1 should be limited to the RC’s ability to monitor and assess the current/expected condition of its RC area within the capabilities of its monitoring tools, and should not include the criteria listed in R1.1.1-R1.1.4.

Each RC has the inherent responsibility to protect the integrity of the system in its RC area and to contribute to the overall integrity of the Interconnection. In order to fulfill this responsibility, the RC performs monitoring through the information collected from the modeled facilities in its RC area to accurately assess the state of the system and to perform real-time assessments. Throughout this process, the RC constantly evaluates the quality of the data received to ensure it has an accurate picture of system conditions for its real-time assessments. To impose a new standard focusing on data quality would be administrative in nature and would not provide any substantial increase in reliability.

Southern Company, Segment(s) 1, 6, 3, 5, 4/13/2015

- 0 - 0

- 0 - 0

Other Answers

John Fontenot, 9/24/2015

- 0 - 0

John Fontenot, 9/24/2015

- 0 - 0

John Fontenot, 9/24/2015

- 0 - 0

John Fontenot, 9/24/2015

- 0 - 0

Jeff Wells, On Behalf of: Grand River Dam Authority, , Segments 1, 3

- 0 - 0

Thomas Foltz, AEP, 5, 11/2/2015

- 0 - 0

Our comments on the SAR posting essentially disagreed with the creation of this standard and the TOP-010 standard to mandate monitoring and analysis capability for the RC and TOP, which are the fundamental “bread and butter” capabilities that these entities must have to perform their assigned functions. We further suggested that the FERC directive could be met by an alternative but more appropriate means of incorporating the necessary requirements in the Organization Certification Requirements.

 

The SDT disagreed with our proposal, stating that: “… these capabilities should be demonstrated at the organization certification stage, but believes they should also be maintained on an ongoing basis through adherence to standards.  Furthermore, development of standards is appropriate since, in general, organization certifications are based on the body of approved standards.”

 

We continue to respectfully disagree that “maintained on an ongoing basis through adherence to standards” is the only approach. Such maintenance can also be mandated through the certification process. For example, if basic monitoring capability is required for certification, there needs to be periodic assessment of whether such capability continues to exist at a specified level, no lower than that assessed at the initial certification stage. If one argues that adherence to standards is the only way to ensure such maintenance, then a good part of the current certification requirements would have to become standards, or their quality and functional capability would need to be ascertained through standards. This is not the case today, nor do we think it will be the case in the future.

 

We once again urge the standard drafting team to consider the organization certification alternative as a means to address this FERC directive. So long as the directive is met, it should not matter whether the requirements are incorporated into the certification requirements or into a new standard. Putting them into the certification requirements is consistent with the intended use of the organization certification process to ensure the responsible entities have the capability to fulfill their functional obligations, whereas putting them into reliability standards is inconsistent with the intended use of standards to drive the right planning and operation behaviors.

 

Notwithstanding the above disagreement with creating this and the TOP-010 standards, the currently posted draft standard appears to micro-manage the requirements and process for providing adequate tools/capability.

 

The five requirements in the proposed IRO-018 standard essentially require that the RC:

 

· Implement an Operating Process or Operating Procedure to address the quality of the Real-time data;

· Indicate to the operating personnel the quality of the Real-time data;

· Implement an Operating Process or Operating Procedure to maintain the quality of any analysis used in its Real-time Assessments;

· Provide its System Operators with indication(s) of the quality of any analysis used in its Real-time Assessments; and

· Utilize an independent alarm process monitor that provides notification(s) to its System Operators when a failure of its Real-time monitoring alarm processor has occurred.

 

These requirements mandate the “how”, not the “what”, and are overly prescriptive, micro-managing the daily business of the RC. If the SDT decides to keep using a standard to meet the FERC directive, then the standard needs only one requirement: that the RC have in place acceptable-quality monitoring and analysis capability at all times (except during downtime for repair, for which a backup needs to be in place) to perform its functions and meet all applicable reliability standards. This requirement would be the “what”, i.e., the necessary capability to perform the RC functions with a specific reliability outcome: ensuring reliability.

In brief, we are unable to support this standard for two main reasons: (a) that the standard is more suited for inclusion in the Organization Certification Requirements and (b) the standard as currently drafted is overly prescriptive and micro-managing.

Leonard Kula, Independent Electricity System Operator, 2, 11/2/2015

- 0 - 0

Tyson Archie, Platte River Power Authority, 5, 11/4/2015

- 0 - 0

We would like to see "quality" defined or clarified. Also, we are not sure who is responsible for the quality of the data received from the interconnections. We also support some of the comments coming out of the MRO standards group.

Joe O'Brien, NiSource - Northern Indiana Public Service Co., 6, 11/4/2015

- 0 - 0

Amy Casuscelli, On Behalf of: Xcel Energy, Inc. - MRO, WECC, SPP RE - Segments 1, 3, 5, 6

- 0 - 0

The NSRF has concerns with redundancy and technical complications in the IRO-018-1 standard as proposed. The data quality objective can be simplified into a single requirement in IRO-018-1 or TOP-001-3 / IRO-008, which is for entities to have tools or processes that consider data quality to reasonably assure a high confidence that the system is in a reliable state. Existing Energy Management Systems (EMS) / Real-Time Contingency Analysis (RTCA) tools already have this capability.

Redundancy:

The NSRF recognizes that FERC directed the drafting team to address missing data quality issues based on the 2003 blackout report.  However, existing standards TOP-001-3 and TOP-003-3 already require effective monitoring and control, which includes proper data quality.

As an example, R13 of TOP-001-3 sets clear requirements that a real-time assessment must be performed at least once every 30 minutes.

All TOPs’ assessment tools already consider bad data detection and identification from embedded software algorithms, which are pre-requisites for successful execution of SE/RTCA. TOPs engaged in monitoring the execution of their assessment tool(s) already address problems with data input quality and assessment quality.

Assessment tools must have robust data quality input and assessment capabilities to detect and identify problem(s) with any single piece of data (out of thousands of inputs) especially if that particular bad input (or limited set of bad input data) did NOT affect overall successful performance of the tool.

Technical Compliance Complications that Distort the Reliability Goal:

The zero-defect nature of compliance, until fixed, drives unnecessary, costly EMS / RTCA system upgrades without measurable system reliability improvements. The proposed TOP-010 and IRO-018 standards introduce vague and unclear formulations that will cause misunderstandings during compliance audits. Therefore, it is better to revise TOP-010 to a single requirement, or to revise TOP-001-3 or TOP-003-3 (and the corresponding IRO standards) with an additional simple requirement for entities to have tools or processes that consider data quality to reasonably assure a high confidence that the system is in a reliable state.

Assessment tools use thousands of input data points, including analog measurements and switching device statuses. Therefore, the reliability goal is that the assessment tool have bad data detection and identification algorithms that allow it to solve, notify / log the system operator of bad data, and alarm if the bad data may compromise the assessment or solution.
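A minimal sketch of the residual-style screening behavior described above is shown below; it assumes hypothetical field names and thresholds and illustrates the concept only, not any vendor’s EMS or state-estimator implementation.

```python
# Illustrative sketch only: a simplified normalized-residual bad-data
# screen, not any vendor's EMS or state-estimator implementation.
# Field names and thresholds are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class Measurement:
    name: str
    value: float     # telemetered value
    estimate: float  # state-estimator solved value
    sigma: float     # expected measurement error (standard deviation)

def screen_bad_data(measurements, flag_threshold=3.0, alarm_fraction=0.05):
    """Flag inputs with large normalized residuals; log each one, and
    alarm only if enough inputs are bad to compromise the assessment."""
    bad = [m.name for m in measurements
           if abs(m.value - m.estimate) / m.sigma > flag_threshold]
    for name in bad:                       # notify / log the operator
        print(f"LOG: bad data detected on {name}")
    if bad and len(bad) > alarm_fraction * len(measurements):
        print("ALARM: bad data volume may compromise the assessment")
    return bad
```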

Identifying vague input data issues such as “analog data not updated” or “data identified as suspect” is problematic from a compliance standpoint. Some Energy Management Systems (EMS) simply cannot identify all suspect data, and therefore the zero-defect compliance expectation to identify all suspect data or all bad analog data is technically infeasible. The reliability goal is a high-confidence assessment that the system is in a reliable state. That is very different from the zero-defect standard as written, which requires identifying all “analog data not updated” or all “suspect data”.

Significant technical problems exist with the TOP-010 requirements when applied to input data received from other TOPs or RCs (either directly or via ICCP). There is no technically feasible mechanism to detect manually entered statuses. An example is detecting a manually entered “CLOSED” circuit breaker status whose actual status is “OPEN”, if such data was received via ICCP.

TOP-010 R3 is unclearly defined, as Transmission Operators would have major difficulty reaching a conclusion as to what “the quality of data necessary to perform real-time assessment” is. At any moment in time, any specific measurement (or subset of measurements) might be lost or detected as “bad”. That does not necessarily mean that the real-time assessment would be inaccurate or invalid. The tool’s accuracy can be measured by other inherent quantitative indicators such as the “algebraic sum of allocation errors” or a “confidence percentile”. An aggregate reasonable confidence percentile measurement would be a sufficient system reliability objective, reasonably proving the system was in a reliable state.
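As a rough illustration of the aggregate indicators named above, the following sketch computes a signed sum of allocation errors and a simple confidence percentile; the tolerance and the percentile formulation are assumptions, not a defined NERC or vendor metric.

```python
# Hypothetical sketch of the aggregate indicators named above; the
# tolerance and the percentile formulation are assumptions, not a
# defined NERC or vendor metric.
def allocation_error_sum(errors):
    """Algebraic (signed) sum of allocation errors across all inputs."""
    return sum(errors)

def confidence_percentile(errors, tolerance=0.02):
    """Percent of inputs whose allocation error is within tolerance."""
    if not errors:
        return 0.0
    within = sum(1 for e in errors if abs(e) <= tolerance)
    return 100.0 * within / len(errors)

# Example: one outlier among four inputs still yields 75% confidence.
# confidence_percentile([0.01, -0.005, 0.4, 0.0]) -> 75.0
```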

TOP-010 R5 introduces unclear terminology of “maintaining the quality of any analysis used in real-time assessment”.

MRO-NERC Standards Review Forum (NSRF), Segment(s) 3, 4, 5, 6, 1, 2, 9/9/2015

- 1 - 1

RC, BA & TOP entities currently have adequate tools for real-time monitoring and analysis.  The existing Standards adequately define what needs to be monitored by each entity without defining the tools.   Creating new requirements will not increase the reliability of the BES.

 

Additionally, some of the new proposed requirements (IRO-018-1 Req. 1, TOP-010-1 Req. 1) state:

“Each RC/TOP/BA shall implement an Operating Process to address the quality of the Real-time data…” The term “quality” is ambiguous and subjective and needs to be defined. Similarly, in Requirement 2, the term “indications of quality” needs to be defined. If not defined, it could result in varying interpretations throughout the industry.

Lastly, the NERC Operating Reliability Subcommittee (ORS) has drafted a Reliability Guideline, “Loss of Real-Time Reliability Tools Capability / Loss of Equipment Significantly Affecting ICCP Data.” This guideline will help ensure that tools are adequate and if they are degraded for any reason, the potentially impacted entities are aware and can take action if needed.

PJM supports the comments submitted by the ISO/RTO Council Standards Review Committee.

- 5 - 0

R1.1 uses “but not limited to”. That language is too open-ended and cannot be audited or compliance-limited. Compare it to R3. “But not limited to” only belongs in a Measurement.

R2 is ambiguous as to whether a single data point of bad quality needs to be flagged, or whether the aggregate data must be so bad that the state estimator cannot solve. Modern EMS systems incorporate data quality checks within their algorithms. However, how this requirement is phrased will dramatically impact the compliance risk an organization faces.

 

Jonathan Appelbaum, 11/5/2015

- 0 - 0

Darnez Gresham, Berkshire Hathaway Energy - MidAmerican Energy Co., 3, 11/5/2015

- 0 - 0

Angela Gaines, On Behalf of: Portland General Electric Co., WECC, Segments 1, 3, 5, 6

- 0 - 0

R3 & R4: Duke Energy requests further clarification on the compliance aspects of R3 and R4. Operating studies use the latest information available, but that data changes continuously so the studies will never be 100% accurate. More information is necessary to know how to measure their quality effectively.

Duke Energy , Segment(s) 1, 5, 6, 4/10/2014

- 0 - 0

In part 1.1 of R1, if the bulleted list is intended to be an example list, then the examples should not be given part numbers but should be rolled up into the main sentence. If it is intended to be a minimum set of criteria, then “but not limited to” should be replaced with “at a minimum”.

FMPA, Segment(s) , 11/9/2015

- 0 - 0

Andrew Pusztai, 11/9/2015

- 0 - 0

Scott McGough, Georgia System Operations Corporation, 3, 11/9/2015

- 0 - 0

See comments for Q2.

John Brockhan, 11/9/2015

- 0 - 0

Oshani Pathirane, 11/9/2015

- 0 - 0

While Peak supports the spirit of this proposed Standard, Peak recommends there be a requirement for entities who provide data per IRO-010 to resolve data quality issues in a mutually agreeable time schedule. The RC could have a process, but if there is no requirement for entities to fix the issues the end result is not achieved. The Standard as written falls short of providing resolution. The same comments apply to TOP-010-1.   

Jared Shakespeare, 11/9/2015

- 0 - 0

PPL NERC Registered Affiliates, Segment(s) 1, 3, 5, 6, 9/11/2015

- 0 - 0

JEA, Segment(s) , 11/9/2015

- 0 - 0

We do not believe the issues addressed by the FERC directive rise to the level of requiring a reliability standard.  The intent of the directive and the resulting actions to be taken by the various entities would be better served by an official Guideline rather than a generic standard.  Forcing this into a Standard requires a one-size fits all approach that is leading to varied interpretations on “quality” and “adequacy” and may not enhance reliability of the BES.

We believe the requirements in general could be improved to be more results based.  As written, they largely will only result in identifying deficiencies after the fact when doing event analysis.  An entity may have a process or procedure as required, but they could miss a piece of data or fail to identify fully the impact a quality issue may have upon their situational awareness.  Simply having the process does not result in increased reliability.

Most entities already have a process in place to alarm or indicate data quality as needed to maintain reliability.  Entities are already required to operate reliably, within SOLs and IROLs, etc.  The creation of this standard as written would serve only to document that process and put it under auditable enforcement – with no discernible impact to maintaining reliability.  In order to make this standard truly results based, there needs to be some identification of the quality level, or data quality thresholds that must be maintained in order for reliability to be maintained.  Then that level (or quality of the data measurements) must be maintained per the standard. 

We suggest that more direction needs to be given by the Standard in a few areas. One is that the applicable entity should determine the data ranges, time periods, number of manually entered values, etc., that can degrade analysis to the point that reliability is threatened (R1.1.1-R1.1.4).

We also find it problematic when an entity does not “own” the data and is simply receiving a quality flag from a sender. The RC, for example, may not receive an accurate quality flag, or the quality flag may be corrupted in translation over ICCP. Also, there is no requirement that the measurement devices even be of a particular accuracy. For example, the quality threshold may be narrower than the accuracy of the device.

The use of the term “suspect” in R2.1.4 in TOP-010-1 could lead to an interpretation that the operator “should have suspected” the data was incorrect.  The word “suspect” is used in some EMS packages as an identifier for garbage or data that is suspect.  We recommend the word be evaluated and replaced.

R3 is very problematic in that it implies there is a level of inadequacy that studies must not fall below by requiring a level of “quality” to be maintained. This seems to be an attempt to avoid using the word “adequate”. Without defining the required level of quality, there is no way an entity can be compliant. Any entity may experience some reduced level of quality, but may still have acceptable performance from their studies without taking action to correct or mitigate the data. As written, the entity would be in violation for simply failing to “maintain” the level of quality. Perhaps R3 could be written this way:

R3. Each Reliability Coordinator shall implement an Operating Process or Operating Procedure to maintain an acceptable level of quality of any analysis used in its Real-time Assessments. The Operating Process or Operating Procedure shall include: [Violation Risk Factor: Medium] [Time Horizon: Same Day Operations, Real-time Operations]

3.1. Criteria for determining the minimum quality of any analysis used in its Real-time Assessments; and

3.2. Actions to resolve unacceptable quality deficiencies in any analysis used in its Real-time Assessments.

R4 seems to be applicable to situations where a tool is used to perform the RTA.  This can become problematic when the assessment is simply an evaluation done by reviewing data and determining that no changes on the system have occurred such as could occur with a TOP who has only a few BES elements and does not possess an EMS or RTCA style "tool".

We suggest that altering the phrase “independent alarm process monitor” could be beneficial.  As stated, the phrasing seems to suggest particular processes or tools rather than the intent to just have an “independent process” to monitor the alarming system.  We suggest the change as:

R5. Each Reliability Coordinator shall utilize a process to independently monitor its Real-time monitoring alarm process monitor in order to provide notification(s) to its System Operators when a failure of its Real-time monitoring alarm processor has occurred.

SPP Standards Review Group, Segment(s) 1, 3, 5, 11/9/2015

- 0 - 0

The SRC fails to see the reliability risk that this project is intending to address. The August 14 Blackout as well as the 2011 Southwest Outage have been thoroughly and exhaustively investigated and reported upon, and the root causes mitigated appropriately. Therefore, pointing to the need for this project based on mitigated, historical events falls short of identifying the reliability risk this is intended to “fix.” If, for example, WECC continues to have a vested interest in further mitigating the 2011 Southwest Outage through standard development, we suggest this project be migrated into a regional standard for WECC. Lastly, the SRC believes that, absent a Standard specific for tools, an RC, TOP, or BA would, in fact, have violations of existing operational Requirements if they did not provide adequate monitoring and tools to their operators (i.e., other “things” would happen).

Further, the Requirements as written, “…to address the quality of the Real-time data necessary…” are ambiguous, lack consensus about how to measure, and do not rise to the level of a NERC Standard.

This proposed project appears to be well-suited for a guideline document as opposed to a Standard.  As written, the SAR appears to intend to write a “how” not “what” Standard (i.e., it does not appear to be a results-based standard).  The SRC believes that the existing Standards (i.e., IRO, TOP and BAL) sufficiently define what needs to be monitored by each entity without defining the tools (i.e., without defining the “how”), which is appropriate.  In the alternative, this could be considered a process to be used for Certifying new entities, in line with a methodology developed by the ERO and registered entities for assessing adequacy of tools for addressing the “quality” of real-time data, for assurance that RC, BA and TOPs have the ability to monitor appropriately in accordance with existing, performance-based Standards Requirements.

The SRC notes that the tools available to operators have progressed well beyond those available in 2003.  If defined tools would have been hardcoded in a standard at that time, it would have limited focus and investment to those things that were in the standard.  Further, expanding on our point above, the SRC believes that the “what” regarding tools is more appropriately captured in the certification expectations for BAs, RCs, and TOPs.  Additionally, it would be appropriate for Regions to evaluate tools as part of the Registered Entity’s Inherent Risk Assessment (IRA).  This would include the scope of tools, backups, etc. and would provide an adaptable approach that would encourage continuous improvement.  

Additionally, the SRC recommends that NERC coordinate with the NATF to encourage inclusion of an ongoing “care and feeding” of tools evaluation and information sharing in their efforts with the provision that they make information on good practices available to the wider NERC community so that non-members can learn from the innovation of others.

Finally, to avoid these issues in the future and to support communicating to FERC when a Standard is not needed and another tool is more suitable, the SRC suggests that future SARs be voted on by industry to determine whether they should proceed as a Standards project or another means is a more appropriate method through which to achieve the SAR’s objective.

Standards Review Committee (SRC), Segment(s) 2, 11/9/2015

- 0 - 0

Not applicable to BPA.

Andrea Jessup, On Behalf of: Bonneville Power Administration, WECC, Segments 1, 3, 5, 6

- 0 - 0

 See comments in item 2 below.

Jack Stamper, Clark Public Utilities, 3, 11/9/2015

- 0 - 0

Megan Wagner, 11/9/2015

- 0 - 0

Part 1.1, by stating “Criteria for evaluating potential Real-time data quality discrepancies…”, implies that a contingency analysis has to be done.  Suggest removing “potential” from Part 1.1.

 

Language in R1.1 uses “but not limited to”. That language is too open-ended and cannot be audited. Compare it to R3’s use of “shall include”. “But not limited to” only belongs in a Measurement.

 

R2 is a bit ambiguous as to whether a single data point of bad quality needs to be flagged, or whether the aggregate data must be so bad that the state estimator cannot solve.

 

Suggest removing the word “any” from R3 and R4 (relative to “any analysis”) and replacing it with “reliability related”, as “any” could be too broadly applied or interpreted. Additionally, the term “analysis” is broad. Standards related to Project 2014-03, approved through NERC as of this time, define such things as Real-time Assessments and Operational Planning Analysis. It is not exactly clear what “analysis” would be referring to.

Project 2009-02, Segment(s) 1, 0, 2, 3, 4, 5, 6, 7, 11/9/2015

- 0 - 0

Need more clarity in general; see Q2 for more specifics.

Glenn Pressler, 11/9/2015

- 0 - 0

(1) The language within Requirement R1 is vague and should not require criteria for evaluating data quality; references to such criteria are ambiguous and unenforceable. The requirement needs to identify what real-time data is necessary to perform monitoring and assessments and consider whether the data specifications are maintained for reliability. The SDT should also clarify what is considered “quality” data and how an entity should identify data quality. The minimum criteria are not specific and do not provide enough information to make an objective determination.

(2) Requirement R4 suggests that the drafting team expects System Operators to receive quality data. If an entity makes data available with a quality code, but the system fails to update the quality code, is this a violation? The SDT also needs to identify the evidence required for this requirement and whether a validation process is necessary.

(3) The language within Requirement R5 expects an entity to have redundant alarms or independent alarms for real-time monitoring.  What does “independent” mean in this context?  The drafting team provides technical examples such as “heartbeat” or “watchdog” monitoring systems in its rationale, but does the independent system need to be separate from the real-time monitoring?
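As a rough illustration of the “heartbeat”/“watchdog” concept the rationale references, a hypothetical sketch follows; the timeout, function names, and notification path are assumptions, and an actual monitor would run independently of the alarm processor it watches.

```python
# A minimal "heartbeat"/"watchdog" sketch of the concept in the rationale.
# All names and the timeout are hypothetical; a real monitor would run on
# infrastructure independent of the alarm processor it watches.
import time

HEARTBEAT_TIMEOUT_S = 60.0          # assumed acceptable heartbeat gap
last_heartbeat = time.monotonic()

def record_heartbeat():
    """Called whenever the Real-time monitoring alarm processor checks in."""
    global last_heartbeat
    last_heartbeat = time.monotonic()

def notify_operators(message):
    print(f"OPERATOR NOTIFICATION: {message}")  # stand-in for paging/alarming

def watchdog_check():
    """Run periodically by the independent monitor; notifies System
    Operators if the alarm processor has stopped producing heartbeats."""
    if time.monotonic() - last_heartbeat > HEARTBEAT_TIMEOUT_S:
        notify_operators("Alarm processor failure: no heartbeat received")
```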

ACES Standards Collaborators - Real-time Project, Segment(s) 1, 4, 5, 11/9/2015

- 0 - 0

Jennifer Losacco, On Behalf of: NextEra Energy - Florida Power and Light Co., FRCC, Segments 1

- 0 - 0

ReliabilityFirst offers the following comments for consideration:

 

1. Requirement R2

   a. It is unclear what the phrase “indication(s) of the quality of the Real-time data” is referring to. RF requests clarification on the term “indications” and what this involves.

   b. Also, since the System Operators work for the RC, it is unclear who at the RC will be providing “indications” to the System Operators. As written, the System Operators (working for the RC) could provide indications to themselves. This does not seem to be the intent of the Requirement.

2. Requirement R4

   a. Similar to Requirement R2, it is unclear what the phrase “indication(s) of the quality of any analysis” is referring to. RF requests clarification on the term “indications” and what this involves.

   b. Also, since the System Operators work for the RC, it is unclear who at the RC will be providing “indications” to the System Operators. As written, the System Operators (working for the RC) could provide indications to themselves. This does not seem to be the intent of the Requirement.

Anthony Jablonski, ReliabilityFirst , 10, 11/9/2015

- 0 - 0

Comments: ERCOT expresses its concern that the proposed standard is too prescriptive and goes beyond the associated FERC directive regarding a requirement addressing “capabilities.”  In particular, these standards were developed to address operator awareness of tool or other outages that could impact real-time monitoring.  Further, several of the requirements involve many more entities than the Reliability Coordinators and, absent a requirement for coordination, participation, and action in response to the Reliability Coordinator when an issue is identified, the proposed standard will not achieve its intended objective as written.  This is extremely challenging (R1.2) because the majority of issues related to poor data quality or invalid analysis tool solutions can only be resolved by parties outside of the Reliability Coordinator (e.g., facility owners, telecom companies, etc.).  Additionally, real-time data and monitoring capabilities are critical to the certification of a Reliability Coordinator and are not “dynamic.”  Because such “capabilities” are complex, require coordination and inputs from other entities, and are key to the continued performance of a Reliability Coordinator’s duties, they are not subject to change or revision often and, therefore, likely do not need continued monitoring and assessment.  Finally, several other reliability standards and associated requirements are contingent upon the availability of real-time tools and data, which standards and requirements are subject to the compliance monitoring and enforcement program.  Thus, ERCOT would recommend that requirements addressing capabilities be utilized during certification and not as a reliability standard subject to the compliance monitoring and enforcement program.

 

Should NERC continue this project, however, ERCOT recommends that the requirements be narrowly focused on alerting and alarming operators when their tools and/or displays are no longer working or are otherwise compromised during real-time operations.  Accordingly, ERCOT provides the following comments by requirement:

 

Requirements R1 and R2

 

ERCOT respectfully recommends that requirements R1 and R2 be combined.  Because the need to address data issues generally arises as a result of a data indicator or the need for manual data intervention by system operators, the value of a process to address such issues without the context of time or need is significantly diminished.  Hence, ERCOT proposes the following:

 

R1.  Each Reliability Coordinator shall provide its System Operators with indication(s) of the quality of the Real-time data necessary to perform its Real-time monitoring and Real-time Assessments. [Violation Risk Factor: Medium ] [Time Horizon: Real-time Operations]

R1.1 The Reliability Coordinator shall initiate actions to coordinate resolution of Real-time data quality discrepancies with the entity(ies) responsible for providing the data when failure or degradation is indicated.

 

Although this change does not accomplish the intended objective, since the parties required to respond to the RC’s coordination actions have no requirements to respond or correct the issue, it does limit the requirements to what the RC as an entity has control over.

 

Requirements R3 and R4

 

ERCOT respectfully recommends that requirements R3 and R4 be combined.  Because the need to address issues with real-time analyses generally arises as a result of an indicator that a particular analysis did not complete, is offline, or requires manual intervention by system operators, the value of a process to address such issues without the context of time or need is significantly diminished.  Additionally, the availability of backup or offline processes for real-time analyses mitigates the risks associated with an issue or outage of analysis capabilities.  For R4 specifically, “quality” is more ambiguous when considering analysis tools versus data quality.  Data quality is more discrete, defined by predetermined limits for analog values and the logic behind discrete/binary values.  Analysis “quality” is not an appropriate term, as it implies a range rather than a discrete nature (valid/invalid).  Hence, ERCOT proposes the following:

 

R3. Each Reliability Coordinator shall provide its System Operators with indication(s) of whether the tool(s) used in its Real-time monitoring and Real-time Assessments are functioning as intended. [Violation Risk Factor: Medium] [Time Horizon: Real-time Operations]

 

R3.1 The Reliability Coordinator shall initiate actions to resolve any issues internally and to coordinate resolution of any data issues that are impacting such tool(s) with entity(ies) responsible for providing data inputs to such tool(s) when failure or degradation is indicated.

 

ERCOT recommends that necessary revisions be made to the Violation Severity Levels to ensure consistency with the proposed revisions.

Elizabeth Axson, 11/9/2015

Unofficial_Comment_Form_2009-02__ercot_final.docx

- 0 - 0

- 0 - 0

Neutral position as it does not apply to ITC.

Meghan Ferguson, 11/9/2015

- 0 - 0

Texas RE recommends making the retention period for R3 longer than 30 days.  This requirement consists of a procedure and the implementation of that procedure.  A 30-day retention policy will make it very difficult for a registered entity to demonstrate compliance.  The policy implies that there is not a reliability issue if compliance monitoring is not performed within 30 days (or every 30 days).  Is there an event analysis category that captures quality of data and assessments where the CEA may call for longer retention?  Effectively, this retention policy risks masking a reliability issue where the quality of the data used in assessments and the quality indication to the System Operators may be inadequate to perform the reliability functions, and the only indication of a failure will occur during an event (or in the preceding 30 days of a monitoring activity).

Texas RE suggests making IRO-018-1 R3, R4 clearer by using some of the language from the rationale.  The requirements address “quality of analysis”, which could depend on many factors, while the rationale uses the language “to address issues related to the quality of the analysis inputs used for Real-time Assessments”.

Texas RE recommends revising the phrase “with indication(s) of” used in proposed IRO-018-1, R2 and R4, as it is vague.  The purpose of IRO-018-1 R2 and R4 appears to be to ensure that the results of the required evaluations of potential Real-time data quality discrepancies are communicated to System Operators so they can be incorporated into Real-time monitoring and Real-time Assessments.  Accordingly, Registered Entities should be required to provide appropriate information from their data quality assessments to their System Operators.  Texas RE suggests substituting “relevant information and/or analyses concerning” for “with indication(s) of” to require that appropriate, relevant information and/or any analyses of the quality of Real-time data be communicated to System Operators, not merely indications of data quality.

The reference to “with indications of” in the corresponding measures should also be revised along these lines.  However, the types of evidence identified in the measures satisfy the proposed “relevant information and/or analyses concerning” standard.

Rachel Coyne, Texas Reliability Entity, Inc., 10, 11/9/2015

- 0 - 0

David Jendras, Ameren - Ameren Services, 3, 11/9/2015

- 0 - 0

Hot Answers

Southern believes that the criteria in R1.1 should be limited to the BA/TOP’s ability to monitor and assess the current/expected condition of its BA/TOP area within the capabilities of its monitoring tools, and should not include the criteria listed in R1.1.1-R1.1.4.

Each TOP has the inherent responsibility to protect the integrity of the system in its BA/TOP area and to not contribute to or cause any system violations in adjacent BA/TOP areas. In order to fulfill this responsibility, the BA/TOP performs monitoring through the information collected from the modeled facilities in its TOP areas to accurately assess the state of the system. The BA/TOP constantly evaluates the quality of the data received to ensure it has an accurate picture of system conditions to perform real-time assessments. To impose a new standard focusing on data quality would be administrative in nature and would not provide any substantial increases in reliability.

Southern Company, Segment(s) 1, 6, 3, 5, 4/13/2015

- 0 - 0

This standard is too vague and needs additional clarification.  We support some of the comments from MRO.

- 0 - 0

Other Answers

John Fontenot, 9/24/2015

- 0 - 0

John Fontenot, 9/24/2015

- 0 - 0

John Fontenot, 9/24/2015

- 0 - 0

John Fontenot, 9/24/2015

- 0 - 0

Jeff Wells, On Behalf of: Grand River Dam Authority, , Segments 1, 3

- 0 - 0

Clarity is needed regarding how granular the Requirements are to the data points themselves. For example, is the Transmission Operator obligated in R3 to provide indication(s) of quality on a data-point basis, or rather, may it be done as a collection of data points, grouping them as needed?  Of even greater concern, would the “actions to coordinate resolution” in R1 need to be performed on a per-data-point basis as well? Hundreds of thousands of data points are involved in Real-time monitoring and Real-time Assessments, and the requirements in this standard must be written realistically to accommodate a high volume of data points, which continues to increase.

In addition, AEP has a large volume of data provided by external entities. AEP would have little to no ability to “coordinate resolution of Real-time data quality discrepancies with the entity(ies) responsible for providing the data” as specified in R1.2, for this externally provided data.

Perhaps a re-ordering of the TOP-010-1 requirements could help the overall flow of the standard. For example, it may be preferable to have a Requirement for indications of quality (R3 for example) precede a Requirement to have an Operating Process or Operating Procedure to address the quality of that data (R1 for example).

Thomas Foltz, AEP, 5, 11/2/2015

- 0 - 0

We do not agree with the need to create this standard, and the way the proposed standard is drafted (overly prescriptive and micro-managing). Please see our comments under Q1.

Leonard Kula, Independent Electricity System Operator, 2, 11/2/2015

- 0 - 0

Platte River (PRPA), like many smaller TOPs, does not have an EMS system capable of performing Real-time Assessments.  To accomplish this task, we contract with our Reliability Coordinator to run our Real-time Assessments.  Platte River provides data points to the RC, who runs the Real-time Analyses and then provides PRPA with the advanced applications.

PRPA does not have a concern with the intent of the standard, but requests that the drafting team address the possibility of relying on 3rd party contracts to perform Real-Time Assessments for entities that do not possess the ability to perform each of the requirements in this standard.

Without the ability to contract a 3rd party for these services, the financial burden of purchasing and installing a new EMS system capable of performing these tasks would easily reach into the millions of dollars. 

Tyson Archie, Platte River Power Authority, 5, 11/4/2015

- 0 - 0

We would like to see "quality" defined or clarified. Also, we are not sure who is responsible for the quality of the data received from the interconnections. We also support some of the comments coming out of the MRO standards group.

Joe O'Brien, NiSource - Northern Indiana Public Service Co., 6, 11/4/2015

- 0 - 0

Amy Casuscelli, On Behalf of: Xcel Energy, Inc. - MRO, WECC, SPP RE - Segments 1, 3, 5, 6

- 0 - 0

R1 and R2: 

It can be very difficult to identify some of the real-time data quality problems listed in this Standard, particularly analog data that is not updating. Many current systems do not have the capability to easily detect this for all analogs, and adding this capability for all data points could require extensive software, database, and/or hardware changes (for performance reasons) that cannot be easily or quickly implemented.
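As a sketch of what such detection might look like where point-level timestamps are available, the following assumes each analog carries the time of its last successful update; the threshold and names are hypothetical.

```python
# Illustrative only: one way a scan could flag analogs not updated within
# a predetermined period, assuming each point carries the timestamp of its
# last successful update. Threshold and field names are hypothetical.
import time

STALE_AFTER_S = 300.0  # assumed "predetermined time period" (5 minutes)

def find_stale_analogs(last_updates, now=None):
    """Return names of points whose last update exceeds the threshold.
    Note: this detects a stale *update*, not an unchanged value; a point
    can legitimately report the same value on every scan."""
    now = time.time() if now is None else now
    return [name for name, ts in last_updates.items()
            if now - ts > STALE_AFTER_S]

# find_stale_analogs({"BUS1_MW": 1000.0, "BUS2_KV": 1290.0}, now=1400.0)
# -> ["BUS1_MW"]
```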

As real-time telemetry becomes more de-centralized in the field and as we are required to rely more and more on data from other entities (via ICCP), it becomes more and more difficult to detect data that is out of range. Putting this requirement on an entity that has no control over the source of the data or how it is provided seems to put an unfair regulatory burden on that entity.

Most of the real-time data quality criteria seem focused on analog data, but incorrect digital data can have a greater impact on analysis results than incorrect/stale analog data. However, identifying non-updating digital data can be even more difficult than identifying non-updating analog data.

How do we prove to an auditor that we identified all instances of data with poor quality?

These requirements seem focused on evaluating the quality of incoming real-time data. Are there any requirements for providing accurate quality codes with data?  For example:

Both ICCP and some RTU protocols support including quality codes with data values. For example, if an entity receiving ICCP data relies on these quality codes to at least partially determine the quality of a data point, then the received quality codes need to be accurate.

Both ICCP and some RTU protocols support including time-stamps of the most recent change of a data point value. Some systems use this received time-stamp when processing the data, and it can impact applications used by operators, including where a new alarm for that point appears in an EMS/SCADA alarm list. Receiving an incorrect time-stamp can negatively impact the information and results provided to an operator.
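A minimal sketch of the per-point structure these protocols can carry might look like the following; the quality-code values and field names are assumptions for illustration.

```python
# Hedged sketch of the per-point structure described above: ICCP and some
# RTU protocols can carry a quality code and a last-change timestamp with
# each value. The QualityCode values and fields are assumptions.
from dataclasses import dataclass
from enum import Enum

class QualityCode(Enum):
    GOOD = "good"
    SUSPECT = "suspect"
    INVALID = "invalid"
    MANUAL = "manually entered"

@dataclass
class TelemetryPoint:
    name: str
    value: float
    quality: QualityCode  # only useful if the sender keeps it accurate
    changed_at: float     # epoch seconds of the most recent value change

def usable_for_assessment(point, max_age_s, now):
    """Receiver-side check that trusts the sender's quality code and
    rejects values whose last change timestamp is implausibly old."""
    return (point.quality is QualityCode.GOOD
            and now - point.changed_at <= max_age_s)
```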

R3 and R4:

What is considered sufficient notification to an operator of real-time data quality problems? If quality codes are shown on EMS/SCADA displays, an operator may not look at the displays with data quality issues. But if alarms are generated to notify the operator, the increase in alarm volume may divert the operator’s attention from more important alarms.

Summarizing the quality of thousands of real-time measurements for an operator may not be something existing systems can easily do.  This may require software and possibly hardware additions to an EMS/SCADA.

R5:  There is no guidance provided for a Transmission Operator to create criteria to evaluate the quality of analysis used in its Real-time Assessments. If an auditor will be expected to review the criteria used by a Transmission Operator, the guidelines that will be provided to auditors for this purpose should be listed here.

R7:  With current EMS/SCADA architectures, it can be difficult to define what comprises the “alarm processor”. While requirements R1-R4 of this Standard may cover the quality of the telemetered inputs to the EMS/SCADA, there are many EMS/SCADA components used after that to make operators aware of alarms. It is not just a specific alarm processing program, but also includes things such as the EMS/SCADA data dissemination programs, the EMS/SCADA User Interface application, audible alarming capabilities, even the operator console hardware itself. Should this requirement be re-worded to make it clearly cover the ability of the system to make alarms available to operators, and not imply it is limited to a specific “program”?

VSLs:

R3 & R4:  It is not clear from the wording of the single VSL level (which is Severe) whether a violation of this Standard is incurred only if there are NO indications of quality of real-time data.  If the meaning is to include situations where one or a few points with bad quality are missed (i.e., not notified to an operator), then assigning a Severe VSL seems inappropriate, and several levels of violations should be implemented.

R6:  Is it correct that a violation of this Standard is incurred only if there are NO indications provided to operators of poor quality of analysis results, and that missing some number of these instances is not a violation of this Standard? If the intent is to consider even a single miss a violation, then assigning it a Severe VSL seems inappropriate, and several levels of violations should be implemented.

R7:  Is it correct that occasional failures of the independent alarm process monitor are not violations of this Standard?

MRO-NERC Standards Review Forum (NSRF), Segment(s) 3, 4, 5, 6, 1, 2, 9/9/2015

- 0 - 0

RC, BA & TOP entities currently have adequate tools for real-time monitoring and analysis.  The existing Standards adequately define what needs to be monitored by each entity without defining the tools.   Creating new requirements will not increase the reliability of the BES.

Additionally, some of the new proposed requirements (IRO-018-1 Req. 1, TOP-010-1 Req. 1) state:

“Each RC/TOP/BA shall implement an Operating Process to address the quality of the Real-time data…” The term “quality” is ambiguous and subjective and needs to be defined. Similarly, in Requirement 2, the term “indications of quality” needs to be defined. If not defined, it could result in varying interpretations throughout the industry.

Lastly, the NERC Operating Reliability Subcommittee (ORS) has drafted a Reliability Guideline, “Loss of Real-Time Reliability Tools Capability / Loss of Equipment Significantly Affecting ICCP Data.” This guideline will help ensure that tools are adequate and if they are degraded for any reason, the potentially impacted entities are aware and can take action if needed.

PJM supports the comments submitted by the ISO/RTO Council Standards Review Committee.

- 5 - 0

R1.1 uses “but not limited to”. That language is too open-ended and cannot be audited or compliance-limited. Compare it to R3. “But not limited to” only belongs in a Measurement.

R2 is ambiguous as to whether a single data point of bad quality needs to be flagged, or whether the aggregate data must be so bad that the state estimator cannot solve. Modern EMS systems incorporate data quality checks within their algorithms. However, how this requirement is phrased will dramatically impact the compliance risk an organization faces.

Jonathan Appelbaum, 11/5/2015

- 0 - 0

The MidAmerican Energy Company (MEC) has concerns with redundancy and technical complications in the TOP-010 standard as proposed.  The data quality objective can be simplified into a single requirement in either TOP-010-1 or TOP-001-3, which is for entities to have tools or processes that consider data quality to reasonably assure a high confidence that the system is in a reliable state.  Existing Energy Management Systems (EMS) and Real-Time Contingency Analysis (RTCA) tools already have this capability.

Redundancy:

The MEC recognizes that FERC directed the drafting team to address missing data quality issues based on the 2003 blackout report.  However, existing standards TOP-001-3 and TOP-003-3 already require effective monitoring and control which includes proper data quality.

As an example, R13 of TOP-001-3 sets clear requirements that a real-time assessment must be performed at least once every 30 minutes. That requirement includes the identification and consideration of data quality to provide successful assessment solutions at least once every 30 minutes.

TOP-001-3 R13 requires TOPs to have operating processes or procedures that address bad data detection and identification issues that are likely to cause assessment failures such as non-convergence or invalid solutions.

All TOPs’ assessment tools already consider bad data detection and identification from embedded software algorithms, which are pre-requisites for successful execution of SE/RTCA.  TOPs engaged in monitoring the execution of their assessment tool(s) already address problems with data input quality and assessment quality.

Assessment tools must have robust data quality input and assessment capabilities to detect and identify problem(s) with any single piece of data (out of thousands of inputs) especially if that particular bad input (or limited set of bad input data) did NOT affect overall successful performance of the tool.

Technical Compliance Complications that Distort the Reliability Goal:

The zero-defect nature of compliance, until fixed, drives unnecessary, costly EMS / RTCA system upgrades without measurable system reliability improvements.  The proposed TOP-010 standard introduces vague and unclear formulations that will cause misunderstandings during compliance audits.  Therefore, it is better to revise TOP-010 to a single requirement, or to revise TOP-001-3 or TOP-003-3 with an additional simple requirement for entities to have tools or processes that consider data quality to reasonably assure a high confidence that the system is in a reliable state.

Assessment tools use thousands of input data points including analog measurements and switching device statuses.  Therefore, the reliability goal(s) are that the assessment tool has bad data detection and identification algorithms that allow the assessment tool to solve, and notify / log the system operator of bad data, and alarm if the bad data may compromise the assessment or solution.

Identifying vague input data issues such as “analog data not updated” or “data identified as suspect” is problematic from a compliance standpoint. Some Energy Management Systems (EMS) simply cannot identify all suspect data, and therefore the zero-defect compliance expectation to identify all suspect data or all bad analog data is technically infeasible.  The reliability goal is a high-confidence assessment that the system is in a reliable state.  That is very different from the zero-defect standard as written, which requires identifying all “analog data not updated” or all “suspect data”.

Significant technical problems exist with the TOP-010 requirements when applied to input data received from other TOPs or RCs (either directly or via ICCP). There is no technically feasible mechanism to detect manually entered statuses.  An example is detecting a manually entered “CLOSED” circuit breaker status whose actual status is “OPEN”, if such data was received via ICCP.

TOP-010 R3 is unclearly defined, as Transmission Operators would have major difficulty reaching a conclusion as to what “the quality of data necessary to perform real-time assessment” is.  At any moment in time, any specific measurement (or subset of measurements) might be lost or detected as “bad”. That does not necessarily mean that the real-time assessment would be inaccurate or invalid. The tool’s accuracy can be measured by other inherent quantitative indicators such as the “algebraic sum of allocation errors” or a “confidence percentile”.  An aggregate reasonable confidence percentile measurement would be a sufficient system reliability objective, reasonably proving the system was in a reliable state.

TOP-010 R5 introduces unclear terminology of “maintaining the quality of any analysis used in real-time assessment”.      

Darnez Gresham, Berkshire Hathaway Energy - MidAmerican Energy Co., 3, 11/5/2015

- 0 - 0

PGE thanks the drafting team for their efforts in developing this proposed standard.  After meeting with the SMEs involved with the proposed standard, they have provided the following:

SUMMARY

  • We recommend a “No” vote on TOP-010-1 at this time because we feel additional clarity is needed.

  • Submit comments on the following:

    • Requesting clarification on the meaning of analysis and Real-time Assessments.  (Human or machine.)

    • If R5 is addressing the knowledge or ability of operators, it belongs in PER-005, not here.

Angela Gaines, On Behalf of: Portland General Electric Co., WECC, Segments 1, 3, 5, 6

- 0 - 0

R1, R2, R3, & R4: Duke Energy questions the use of the term “analysis” in R2 and R4, which are attributable to the BA, when it is not present in R1 and R3, which are attributable to the TOP. The use of the term “analysis” in this context suggests that the BA has some sort of responsibility to carry out analyses similar to those of the RC or TOP. We disagree with this premise. Also, we question why the term “analysis” is not present in R1 or R3. The TOP does in fact have responsibilities to carry out analyses, and this should be acknowledged in R1 and R3. Duke Energy suggests that all references to the BA performing an analysis be removed from the requirements attributable to it, and that the analyses expected to be performed by the TOP be referenced in the requirements attributable to it.

R5: We request further clarification on the use of the phrase “analysis inputs” in the Rationale of R5, as opposed to the use of the term “analysis” in the wording of R5. Is the use of “inputs” meaning other types of data or operational conditions that aren’t described in R1-R4? More clarification regarding what is meant by the phrase “analysis inputs” would be helpful.

R7: Duke Energy requests further explanation of what is meant by the use of the term “processor” in regards to the failure of a Real-time monitoring alarm processor. Is this referring to independent hardware that monitors EMS/SCADA, or independent processes within the EMS system? Is separate hardware necessary, or will separate processes be sufficient? Should this be something that is housed outside of the EMS? We feel that an example of what is meant by “independent” (does this mean external?), as well as “processor”, would enhance clarity in this requirement.

Duke Energy , Segment(s) 1, 5, 6, 4/10/2014

- 0 - 0

In part 1.1 of R1, if the bulleted list is intended to be an example list, then the examples should not be given part numbers but should be rolled up into the main sentence. If it is intended to be a minimum set of criteria, then “but not limited to” should be replaced with “at a minimum”.

In part 2.1 of R2, if the bulleted list is intended to be an example list, then the examples should not be given part numbers but should be rolled up into the main sentence. If it is intended to be a minimum set of criteria, then “but not limited to” should be replaced with “at a minimum”.

FMPA, Segment(s) , 11/9/2015

- 0 - 0

ATC supports the comments submitted by the MRO NSRF.

However, ATC raises the following question: Does having a process or procedure support the quality of your Real-time data? It is not the process or procedure, but rather the systems you have in place to alert the TOP/BA that there is an issue with your data (R1.1 – R1.2).

 

R1 and R2:

o   It can be very difficult to identify some of the real-time data quality problems listed in this Standard, particularly analog data that is not updating.  Many current systems do not have the capability to easily detect this for all analogs, and adding this capability for all data points could require extensive software, database, and/or hardware changes (for performance reasons) that cannot be easily or quickly implemented.

o   As real-time telemetry becomes more de-centralized in the field and as we are required to rely more and more on data from other entities (via ICCP), it becomes more and more difficult to detect data that is out of range.  Putting this requirement on an entity that has no control over the source of the data or how it is provided seems to put an unfair regulatory burden on that entity.

o   Most of the real-time data quality criteria seem focused on analog data, but incorrect digital data can have a greater impact on analysis results than incorrect/stale analog data.  However, identifying non-updating digital data can be even more difficult than identifying non-updating analog data.

o   How do we prove to an auditor that we identified all instances of data with poor quality?

o   These requirements seem focused on evaluating the quality of incoming real-time data.  Are there any requirements for providing accurate quality codes with data?  For example:

§  Both ICCP and some RTU protocols support including quality codes with data values. For example, if an entity receiving ICCP data relies on these quality codes to at least partially determine the quality of a data point, then the received quality codes need to be accurate.

§  Both ICCP and some RTU protocols support including time-stamps of the most recent change of a data point value.   Some systems use this received time-stamp when processing the data, and it can impact applications used by operators, including where a new alarm for that point appears in an EMS/SCADA alarm list.  Receiving an incorrect time-stamp can negatively impact the information and results provided to an operator.

R3 and R4:

o   What is considered sufficient notification to an operator of real-time data quality problems?  If quality codes are shown on EMS/SCADA displays, an operator may not look at the displays with data quality issues.  But if alarms are generated to notify the operator, the increase in alarm volume may divert the operator’s attention from more important alarms.

o   Summarizing the quality of thousands of real-time measurements for an operator may not be something existing systems can easily do.  This may require software and possibly hardware additions to an EMS/SCADA.

R5:  There is no guidance provided for a Transmission Operator to create criteria to evaluate the quality of analysis used in its Real-time Assessments.  If an auditor will be expected to review the criteria used by a Transmission Operator, the guidelines that will be provided to auditors for this purpose should be listed here.

R7:  With current EMS/SCADA architectures, it can be difficult to define what comprises the “alarm processor”.  While requirements R1-R4 of this Standard may cover the quality of the telemetered inputs to the EMS/SCADA, there are many EMS/SCADA components used after that to make operators aware of alarms.  It is not just a specific alarm processing program, but also includes things such as the EMS/SCADA data dissemination programs, the EMS/SCADA User Interface application, audible alarming capabilities, and even the operator console hardware itself.  Should this requirement be re-worded to make clear that it covers the ability of the system to make alarms available to operators and does not imply it is limited to a specific “program”?

VSLs:

o   R3 & R4:  It is not clear from the wording of the single VSL level (which is Severe) whether a violation of this Standard is incurred only if there are NO indications of quality of real-time data.  If the meaning is to include situations where one or a few points with bad quality are missed (i.e., not notified to an operator), then assigning a Severe VSL seems inappropriate, and several levels of violations should be implemented.

o   R6:  Is it correct that a violation of this Standard is incurred only if there are NO indications provided to operators of poor quality of analysis results, and that missing some number of these instances is not a violation of this Standard?  If the intent is to consider even a single miss a violation then assigning it a Severe VSL seems inappropriate, and several levels of violations should be implemented.

R7:  Is it correct that occasional failures of the independent alarm process monitor are not violations of this Standard?

Andrew Pusztai, 11/9/2015

- 1 - 0

 This standard creates a double jeopardy situation. Requirement R1 Part 1.2 of this standard specifies the TOP shall include actions to coordinate resolution of Real-time data quality discrepancies in its Operating Process or Operating Procedure. These actions are also required by proposed TOP-003-3 Requirement R5 Part 5.2 which requires a process to resolve data conflicts for the data required by the data specification in Requirement TOP-003-3 R3. If that data specification requires the provision of Real-time data, then TOP-003-3 Part 5.2 requires a process to resolve data conflicts and quality discrepancies with that Real-time data.

Suggested wording: R1. Each Transmission Operator shall implement an Operating Process or Operating Procedure to address the quality of the Real-time data, excluding Real-time data already addressed by TOP-003-3 R5 Part 5.2, necessary to perform its Real-time monitoring and Real-time Assessments.      

Scott McGough, Georgia System Operations Corporation, 3, 11/9/2015

- 0 - 0

CenterPoint Energy feels R1.1.2 (Analog data not updated within a predetermined time period) brings more of a compliance burden than a reliability benefit.  CenterPoint Energy is confident System Operators investigate and communicate these issues upon suspicion; however, defining a predetermined time period for a data quality code check on each individual piece of data poses a threat to the System Operator’s focus on monitoring important issues on the grid.  CenterPoint Energy also recognizes the challenge in deciphering whether a value has simply not changed in a predetermined time period or has not updated.  CenterPoint Energy recommends the SDT clarify that 1.1.2 refers to the universe or a pre-defined subset of data and is not specific to any one individual piece of data.

John Brockhan, 11/9/2015

- 0 - 0

Hydro One does not support the proposed Reliability Standard TOP-010-1.  We also believe that these requirements are too prescriptive (the “how”) and move away from the results-based approach (the “what”). 

Oshani Pathirane, 11/9/2015

- 0 - 0

Jared Shakespeare, 11/9/2015

- 0 - 0

Comments:      These comments are submitted on behalf of the following PPL NERC Registered Affiliates (“PPL”): Louisville Gas and Electric Company, Kentucky Utilities Company and PPL Electric Utilities Corporation.  The PPL NERC Registered Affiliates are registered in two regions (RFC and SERC) for one or more of the following NERC functions: BA, DP, GO, GOP, IA, LSE, PA, PSE, RP, TO, TOP, TP, and TSP.

 

The PPL NERC Registered Affiliates believe that if additional requirements are necessary for TOPs and BAs to address the quality of their Real-time data, then these requirements should be included in the proposed Reliability Standard TOP-003-3. Per TOP-003-3 (pending regulatory approval), TOPs and BAs are required to maintain a documented specification for the data necessary to perform their Real-time monitoring and Real-time Assessments, including periodicity for providing data and a mutually agreeable process for resolving data conflicts. Therefore, adding requirements to TOP-003-3 to address the quality of the TOP- and BA-specified data is less of a compliance burden to stakeholders than creating a new standard.

 

If the SDT chooses to continue with the proposed TOP-010 standard, we request the sub-requirements R1.1.1 through 1.1.4 and R2.1.1 through 2.1.4 be removed from the proposed TOP-010 to allow entities the flexibility to write an Operating Process or Operating Procedure tailored to their system and their Reliability Coordinator’s specifications where applicable.

PPL NERC Registered Affiliates, Segment(s) 1, 3, 5, 6, 9/11/2015

- 0 - 0

The independent monitoring requirement needs to be better clarified.  Does “independent” mean another system besides the EMS?  We also believe that the terms “quality” and “indicators” are vague.

JEA, Segment(s) , 11/9/2015

- 0 - 0

Following are the same comments we provided on IRO-018-1 draft.  They are generally applicable to the proposed TOP-010-1 Standard also.

We do not believe the issues addressed by the FERC directive rise to the level of requiring a reliability standard.  The intent of the directive and the resulting actions to be taken by the various entities would be better served by an official Guideline rather than a generic standard.  Forcing this into a Standard requires a one-size-fits-all approach that is leading to varied interpretations of “quality” and “adequacy” and may not enhance reliability of the BES.

We believe the requirements in general could be improved to be more results based.  As written, they largely will only result in identifying deficiencies after the fact when doing event analysis.  An entity may have a process or procedure as required, but they could miss a piece of data or fail to identify fully the impact a quality issue may have upon their situational awareness.  Simply having the process does not result in increased reliability.

Most entities already have a process in place to alarm or indicate data quality as needed to maintain reliability.  Entities are already required to operate reliably, within SOLs and IROLs, etc.  The creation of this standard as written would serve only to document that process and put it under auditable enforcement – with no discernible impact to maintaining reliability.  In order to make this standard truly results based, there needs to be some identification of the quality level, or data quality thresholds that must be maintained in order for reliability to be maintained.  Then that level (or quality of the data measurements) must be maintained per the standard. 

We suggest that more direction needs to be given by the Standard in a few areas.  One is that the applicable entity should determine the data ranges, time periods, numbers of manually entered values, etc. that can degrade analysis to the point that reliability is threatened (R1.1.1-R1.1.4). 

We also find it problematic when an entity may not “own” the data and is simply receiving a quality flag from a sender.  The RC, for example, may not receive an accurate quality flag, or the quality flag may be corrupted in translation over ICCP.  Also, there is no requirement that the measurement devices be of a particular accuracy.  For example, the quality threshold may be narrower than the accuracy of the device. 

The use of the term “suspect” in R2.1.4 in TOP-010-1 could lead to an interpretation that the operator “should have suspected” the data was incorrect.  The word “suspect” is also used in some EMS packages as an identifier for garbage or questionable data.  We recommend the word be evaluated and replaced.

R3 is very problematic in that it implies there is a level of inadequacy below which studies must not fall when requiring a level of “quality” to be maintained.  This seems to be an attempt to avoid using the word “adequate”.  Without defining the required level of quality, there is no way an entity can be compliant.  Any entity may experience some reduced level of quality, but may still have acceptable performance from their studies without taking action to correct or mitigate the data.  As written, the entity would be in violation for simply failing to “maintain” the level of quality.  Perhaps R3 could be written this way:

R3. Each Reliability Coordinator shall implement an Operating Process or Operating Procedure to maintain an acceptable level of quality of any analysis used in its Real-time Assessments. The Operating Process or Operating Procedure shall include: [Violation Risk Factor: Medium] [Time Horizon: Same Day Operations, Real-time Operations]

3.1. Criteria for determining the minimum quality of any analysis used in its Real-time Assessments; and

3.2. Actions to resolve unacceptable quality deficiencies in any analysis used in its Real-time Assessments.

R4 seems to be applicable to situations where a tool is used to perform the RTA.  This can become problematic when the assessment is simply an evaluation done by reviewing data and determining that no changes on the system have occurred, such as could occur with a TOP that has only a few BES elements and does not possess an EMS or RTCA-style “tool”.

We suggest that altering the phrase “independent alarm process monitor” could be beneficial.  As stated, the phrasing seems to suggest particular processes or tools rather than the intent to just have an “independent process” to monitor the alarming system.  We suggest the change as:

R5. Each Reliability Coordinator shall utilize a process to independently monitor its Real-time monitoring alarm processor in order to provide notification(s) to its System Operators when a failure of its Real-time monitoring alarm processor has occurred.

SPP Standards Review Group, Segment(s) 1, 3, 5, 11/9/2015

- 0 - 0

The SRC fails to see the reliability risk that this project is intending to address.  The August 14 Blackout as well as the 2011 Southwest Outage have been thoroughly and exhaustively investigated and reported upon, and the root causes mitigated appropriately.  Therefore, pointing to the need for this project based on mitigated, historical events falls short of identifying the reliability risk that this is intended to “fix.”  If, for example, WECC continues to have a vested interest in further mitigating the 2011 Southwest Outage through standard development, we suggest this project be migrated into a regional standard for WECC.  Lastly, the SRC believes that, absent a Standard specific for tools, a RC, TOP, or BA would, in fact, have violations of existing operational Requirements if they did not provide adequate monitoring and tools to their operators (i.e., other “things” would happen).

Further, the Requirements as written, “…to address the quality of the Real-time data necessary…” are ambiguous, lack consensus about how to measure, and do not rise to the level of a NERC Standard.

This proposed project appears to be well-suited for a guideline document as opposed to a Standard.  As written, the SAR appears to intend to write a “how” not “what” Standard (i.e., it does not appear to be a results-based standard).  The SRC believes that the existing Standards (i.e., IRO, TOP and BAL) sufficiently define what needs to be monitored by each entity without defining the tools (i.e., without defining the “how”), which is appropriate.  In the alternative, this could be considered a process to be used for Certifying new entities, in line with a methodology developed by the ERO and registered entities for assessing adequacy of tools for addressing the “quality” of real-time data, for assurance that RC, BA and TOPs have the ability to monitor appropriately in accordance with existing, performance-based Standards Requirements.

The SRC notes that the tools available to operators have progressed well beyond those available in 2003.  Had defined tools been hardcoded in a standard at that time, focus and investment would have been limited to those things that were in the standard.  Further, expanding on our point above, the SRC believes that the “what” regarding tools is more appropriately captured in the certification expectations for BAs, RCs, and TOPs.  Additionally, it would be appropriate for Regions to evaluate tools as part of the Registered Entity’s Inherent Risk Assessment (IRA).  This would include the scope of tools, backups, etc., and would provide an adaptable approach that would encourage continuous improvement.  

Additionally, the SRC recommends that NERC coordinate with the NATF to encourage inclusion of an ongoing “care and feeding” of tools evaluation and information sharing in their efforts with the provision that they make information on good practices available to the wider NERC community so that non-members can learn from the innovation of others.

Finally, to avoid these issues in the future and to support communicating to FERC when a Standard is not needed and another tool is more suitable, the SRC suggests that future SARs be voted on by industry to determine whether they should proceed as a Standards project or another means is a more appropriate method through which to achieve the SAR’s objective.

Standards Review Committee (SRC), Segment(s) 2, 11/9/2015

- 0 - 0

Andrea Jessup, On Behalf of: Bonneville Power Administration, WECC, Segments 1, 3, 5, 6

- 0 - 0

Some of the criteria listed in R1.1 are confusing. Data outside of a prescribed data range would more likely indicate unusual system conditions rather than a data quality issue. We are currently unsure how the monitoring of these criteria could be implemented without additional software. Also, since implementation is part of the Measurement, we assume some logging of this implementation would be necessary to prove compliance, which is also a process without an obvious means of accomplishment. There needs to be substantial guidance or technical discussion providing information on what the expectations would be for utilities to be in compliance with this standard.

Jack Stamper, Clark Public Utilities, 3, 11/9/2015

- 0 - 0

Megan Wagner, 11/9/2015

- 0 - 0

Each requirement is unique to a particular functional entity.  Duplicate requirements could be eliminated by having each requirement refer to both the Transmission Operator and the Balancing Authority.

 

Similarly to IRO-018-1, the language in R1.1 uses “but not limited to”. That language is too open-ended and cannot be audited. Compare it to R3’s use of “shall include”. “But not limited to” only belongs in a Measurement.

 

R2 is a bit ambiguous as to whether a single data point of bad quality needs to be flagged, or only aggregate data so bad that the state estimator cannot solve.

 

Suggest removing the word “any” from R5 and R6 (relative to “any analysis”) and replacing it with “reliability related”, as “any” could be too broadly applied or interpreted.  Additionally, the term “analysis” is broad.  Standards related to Project 2014-03, approved through NERC as of this time, define such things as Real-time Assessments and Operational Planning Analysis.  It is not exactly clear what “analysis” is referring to.

Project 2009-02, Segment(s) 1, 0, 2, 3, 4, 5, 6, 7, 11/9/2015

- 0 - 0

R5: Needs clarification.  What is “quality of any analysis used”?  This needs to be better defined.  How is the System Operator notified, and will the System Operator need evidence?

R7:  Regarding the “independent alarm process monitor” (IAPM): more clarity is needed.  Is the IAPM separate from the SCADA data and the SCADA system?

 

Glenn Pressler, 11/9/2015

- 0 - 0

(1) The standard places significant burdens on System Operators to demonstrate compliance.  Requirements R2, R3, and R5.2 expect solution frequencies of every 5 minutes: 12 times an hour, 288 times a day.  Does each quality deficiency occurring during this period need to be resolved?  An RTU communication issue lasting only 10 minutes would impact only two or three of the day’s 288 instances and generate unnecessary work for a system operator.  We believe the SDT should provide some qualifier for this requirement.

(2) We believe this standard has the potential to add to the System Operator’s workload and take their attention away from their duties of monitoring system reliability.  NERC has spent significant effort educating industry on situational awareness and human performance topics, including cognitive overload, where too many stimuli affecting a System Operator will have negative effects on their performance. 

(3) The standard also introduces potential double jeopardy concerns between requirements TOP-010 R2 and BAL-005 R5.  At the time of the webinar, the SDT did not look into these possibilities.  However, NERC did later respond to the potential double jeopardy with the following statement:

“R5 in proposed BAL-005-1 is limited to information associated with Reporting ACE. R2 in proposed TOP-010-1 applies to Real-time data necessary to perform the BA's analysis functions and Real-time monitoring. These functions go beyond BAL-005 as described in the NERC Functional Model and existing and proposed TOP and IRO standards. Double jeopardy is never an issue because NERC Rules of Procedure include provisions for handling incidences of non-compliance with two or more requirements.  Specifically, NERC or the regional entity would issue a single penalty or sanction as called for in the Rules of Procedure (Appendix 4B, Section 2.5).”

We disagree with this approach, as the ROP focuses on violations of a single requirement or sub-requirement, not separate requirements.  The issue of double jeopardy could occur when there is an event based on poor ACE data quality, which could also implicate TOP-010-1.  While TOP-010-1 covers additional data, it is possible to be in violation of two requirements for the same instance, which is the very definition of double jeopardy.  The NERC ROP does not provide relief for this situation.

(4) We have concerns with the potential impact to a System Operator’s general awareness of the system.  The System Operator will now be spending more time logging and performing actions strictly for compliance instead of BES operation activities.  While we understand that the proposed standard allows the entity to determine the amount of operator action needed, can this be similarly defined in a Process or Procedure?  We have concerns that an auditor may not interpret the standard to allow other employees, such as an EMS Engineer or other support personnel, to mitigate any data or analysis errors.  We request that the SDT consider revising the standard to clarify that a System Operator does not specifically have to be the one who mitigates such issues.  Furthermore, how does the SDT expect entities to show compliance with “implementation” of their Process or Procedure?

(5) The proposed standard includes requirements that should enhance, not detract from, the System Operator’s situational awareness, since it is based on recommendations from the RTBPTF report. The SDT is mindful that System Operators need to remain focused on relevant real-time information while carrying out their duties. The proposed requirements should provide entities the flexibility to determine which operating personnel carry out required actions. Implementation could be demonstrated through evidence that the Operating Process or Procedure is used for its intended purpose. This evidence might include checklists, operator logs, or operations support logs, for example.

(6) Compliance with the proposed requirements is not evaluated by counting quality codes on data points. The measures, VRFs, and VSLs are constructed to evaluate the capability-based performance requirements, as described in section 2.4 of the SPM.  This section states that capability-based Requirements define capabilities needed by one or more entities to perform reliability functions, which can be measured by demonstrating that the capability exists as required.

ACES Standards Collaborators - Real-time Project, Segment(s) 1, 4, 5, 11/9/2015

- 0 - 0

Jennifer Losacco, On Behalf of: NextEra Energy - Florida Power and Light Co., FRCC, Segments 1

- 0 - 0

ReliabilityFirst offers the following comments for consideration:

 

  1. Requirements R3 and R4

    1. It is unclear what the phrase “indication(s) of the quality of the Real-time data” refers to.  RF requests clarification on the term “indications” and what this involves.

       

    2. Also, since the System Operators work for the respective TOP or BA, it is unclear who at the respective TOP or BA will be providing “indications” to the System Operators.  As written, the System Operators (working for the TOP or BA) could provide indications to themselves.  This does not seem to be the intent of the Requirement.

  2. Requirement R6

    1. It is unclear what the phrase “indication(s) of the quality of any analysis…” refers to.  RF requests clarification on the term “indications” and what this involves.

       

    2. Also, since the System Operators work for the TOP, it is unclear who at the TOP will be providing “indications” to the System Operators.  As written, the System Operators (working for the TOP) could provide indications to themselves.  This does not seem to be the intent of the Requirement.

Anthony Jablonski, ReliabilityFirst , 10, 11/9/2015

- 0 - 0

Comments: ERCOT reiterates its comments above as applicable to TOP-010-1.  Should NERC continue this project, however, ERCOT provides the following comments by requirement:

 

Requirements R1 and R3/Requirements R2 and R4

 

ERCOT respectfully recommends that requirements R1 and R3 and Requirements R2 and R4 be combined.  Because the need to address data issues generally arises as a result of a data indicator or the need for manual data intervention by system operators, the value of a process to address such issues without the context of time or need is significantly diminished.  Hence, ERCOT proposes the following:

 

R1. Each Transmission Operator shall provide its System Operators with indication(s) of the quality of Real-time data necessary to perform its Real-time monitoring and Real-time Assessments. [Violation Risk Factor: Medium] [Time Horizon: Real-time Operations]

 

R1.1 The Transmission Operator shall initiate actions to coordinate resolution of Real-time data quality discrepancies with the entity(ies) responsible for providing the data when failure or degradation is indicated.

 

R2. Each Balancing Authority shall provide its System Operators with indication(s) of the quality of Real-time data necessary to perform its analysis functions and Real-time monitoring. [Violation Risk Factor: Medium] [Time Horizon: Real-time Operations]

 

R2.1 The Balancing Authority shall initiate actions to coordinate resolution of Real-time data quality discrepancies with the entity(ies) responsible for providing the data when failure or degradation is indicated.

 

Requirements R5 and R6

 

ERCOT respectfully recommends that requirements R5 and R6 be combined.  Because the need to address issues with real-time analyses generally arises as a result of an indicator that a particular analysis did not complete, is offline or there is a need for manual intervention by system operators, the value of a process to address such issues without the context of time or need is significantly diminished.  Additionally, the availability of back up or offline processes for real-time analyses mitigates the risks associated with an issue or outage of analysis capabilities.  Hence, ERCOT proposes the following:

 

R3. Each Transmission Operator shall provide its System Operators with indication(s) of whether the tool(s) used in its Real-time monitoring and Real-time Assessments are functioning as intended. [Violation Risk Factor: Medium] [Time Horizon: Real-time Operations]

 

R3.1 The Transmission Operator shall initiate actions to resolve any issues internally and to coordinate resolution of any data issues that are impacting such tool(s) with entity(ies) responsible for providing data inputs to such tool(s) when failure or degradation is indicated.

Elizabeth Axson, 11/9/2015

- 0 - 0

R1 and R2: The requirements are vague as to what constitutes quality.  Do we consider out-of-tolerance data? High values? Low values? What is too high? What is too low? 

R3 and R4: If quality alarms are generated to alert the operator, the increase in alarm volume may distract the operator from more important alarms.  If quality codes are shown on the EMS/SCADA displays, an operator may not look at or notice the displays with data quality issues.

Summarizing the quality of thousands of real-time measurements for an operator may not be something existing systems can easily do.  This may require software and possibly hardware additions to an EMS/SCADA.

R5:  TOP-001-3 R13 requires that a Real-time Assessment be performed at least once every 30 minutes.  Resolving issues with the quality of analysis for the Real-time Assessment outside of normal business hours may require staff to come into the office, which may take more than 30 minutes.  This would put an entity out of compliance with TOP-001-3 unless staffing is increased, which may not be feasible.

There is no guidance provided to create criteria to evaluate the quality of analysis used in Real-time Assessments.  There could be discrepancies between an auditor and an entity over what constitutes acceptable criteria.  The guidelines an auditor would be expected to apply in such a review should be listed.

- 0 - 0

The criteria specified in R1.1 and R1.2 are too prescriptive. The requirements as written require System Operators to monitor the quality of all data specified per proposed TOP-003-3 R1. In a Real-time system there are thousands of data points in use, and having a few of them outside a prescribed data range or not updated within a predetermined time period may have no impact on BES reliability. Requiring System Operators to track the quality of all data can be a distraction and an unnecessary burden. ITC believes the intent of the standard is for entities to pay attention to the quality of certain pre-identified data used in Real-time monitoring and analysis.  However, the future standard TOP-003-3 will result in this requirement being applied to all data used in Real-time monitoring and analysis. Transmission Operators are required to perform a Real-time Assessment, and these assessments most commonly utilize tools designed to reduce dependencies on bad, invalid, or suspect data; therefore, placing a requirement for evaluating invalid or suspect data in Real-time does not provide any reliability benefit.

 

The proposed TOP-001-3 R1 requires that each Transmission Operator (TOP) shall act to maintain the reliability of its Transmission Operator Area via its own actions or by issuing Operating Instructions. Proposed TOP-001-3 R10 requires the TOP to determine SOL exceedances, and TOP-001-3 R12 requires the TOP to not operate outside an IROL for more than the associated IROL Tv.  Together, these requirements inherently imply that the Transmission Operator should ensure the quality of data used in Real-time to achieve the desired outcome of the Real-time Assessment, which is to maintain reliability of its area by monitoring SOLs and IROLs and taking appropriate actions. The proposed TOP-010-1 R1 seems to specify ‘how to’ comply with these requirements, which does not follow results-based standard practice. In addition, the rationale for R13 in proposed TOP-001-3 states “The Transmission Operator’s Operating Plan will describe how to perform the Real-time Assessment. The Operating Plan should contain instructions as to how to perform Operational Planning Analysis and Real-time Assessment with detailed instructions and timing requirements and how to adapt to conditions where processes, procedures, and automated software systems are not available (if used)”.  Thus, the actions needed on data quality are already expected in the Operating Plan to ensure the desired outcome. Therefore, a new requirement for data quality may be redundant.

 

In summary, it is appropriate to have an Operating Procedure to maintain and address the quality of data used in Real-time Assessments.  However, monitoring and analyzing data quality for all data in Real-time is not practical and does not add value to reliability. Real-time Assessment tools used by TOPs have processes to manage bad data and provide valid results.  Data quality should be monitored outside of the Real-time operator environment, wherein staff other than System Operators can analyze patterns of data to identify data quality issues that truly impact Real-time analysis. The measures specified in TOP-010-1 indicate dated operator logs and voice recordings as evidence for compliance, which will require the System Operator to monitor the quality of all data.  The expectation that the System Operator review data quality in Real-time for every data point is overkill.

 

TOP-010-1 R3 is redundant when compared to TOP-010-1 R6. R6 requires an indication of the quality of analysis used in Real-time Assessments, while R3 requires an indication of the quality of data used in Real-time Assessments. The quality of analysis used for Real-time Assessments may be an indicator of the quality of data used in Real-time Assessments; having a requirement on both is redundant and can result in multiple noncompliance incidents for a single problem. For example, a single bad Real-time data point may constitute a violation of TOP-010-1 R3, and since this data is used in the Real-time Assessment it may also cause a violation of TOP-010-1 R6.

 

ITC supports TOP-010-1 R7 having an independent processor to monitor the Real-time alarm system because it provides value, given the heavy reliance on alarms by System Operators for situational awareness. However, the standard should specify whether the unavailability of the independent processor creates a violation of standard requirements. Also, the implementation plan of 12 months for R7 is unrealistic, as compliance with this requirement may require entities to procure and implement new tools, which is a lengthy process.

Meghan Ferguson, 11/9/2015

- 0 - 0

Texas RE recommends adding the Balancing Authority (BA) function to the applicability of R5 and R6.  While it could be argued that a BA does not have to perform Real-time Assessments per a Reliability Standard requirement (in other words, is not explicitly required to do Real-time Assessments), its actions to maintain frequency are effectively an assessment based on Real-time data. 

Texas RE suggests using language from the rationale to make TOP-010-1 R5 and R6 clearer.  The requirements address “quality of analysis”, which could depend on many factors, while the rationale uses the language “to address issues related to the quality of the analysis inputs used for Real-time Assessments”.

Texas RE recommends revising the phrase “with indication(s) of” used in proposed TOP-010-1 R3, R4, and R6, as it is vague.  The purpose of TOP-010-1 R3, R4, and R6 appears to be to ensure that the results of the required evaluations of potential Real-time data quality discrepancies are communicated to System Operators so that information regarding such data discrepancies can be incorporated into Real-time monitoring, analysis functions, and Real-time Assessments.  Accordingly, registered entities should be required to provide the actual information from their data quality assessments to their System Operators.  Texas RE suggests substituting “relevant information and/or analyses concerning” for “with indication(s) of” to require that appropriate, relevant information and/or any analyses of the quality of Real-time data be communicated to System Operators, not merely indications of data quality. 

The reference to “with indications of” in the corresponding measures should also be revised along these lines.  However, the types of evidence identified in the measures would satisfy the proposed “relevant information and/or analyses concerning” standard.

Rachel Coyne, Texas Reliability Entity, Inc., 10, 11/9/2015

- 0 - 0

R4 states "Each Balancing Authority shall provide its System Operators with indication(s) of the quality of Real-time data necessary to perform its analysis functions and Real-time monitoring."  Are the analysis functions limited to real-time analysis, or could this be interpreted to apply to study and after-the-fact analysis? We believe that this needs to be clear.

R5. What does "maintain the quality" mean?  If the quality of the analysis is not currently what it should be, this requirement appears to preclude improving that quality.

R6 requires "indication(s) of the quality of any analysis"; how is quality defined?  We believe this is very ambiguous as written; our internal discussions resulted in multiple opinions.  We believe that the term "quality" needs to be concisely defined within the requirement.

 

David Jendras, Ameren - Ameren Services, 3, 11/9/2015

- 0 - 0

Hot Answers

Southern believes that the implementation date should be pushed back to allow time for the industry to determine the appropriate technology that is sufficient for each entity’s operations.  We also believe that, in order to fully comply with the proposed standard, enough time should be allowed for the industry to update their current procedures and/or create acceptable procedures, provide training to the appropriate System Operators, and determine the technology that is available and appropriate to support their operations, along with the required functionality.

Southern Company, Segment(s) 1, 6, 3, 5, 4/13/2015

- 0 - 0

This standard is too vague and needs additional clarification.  We support some of the comments from MRO.

- 0 - 0

Other Answers

John Fontenot, 9/24/2015

- 0 - 0

John Fontenot, 9/24/2015

- 0 - 0

John Fontenot, 9/24/2015

- 0 - 0

John Fontenot, 9/24/2015

- 0 - 0

Jeff Wells, On Behalf of: Grand River Dam Authority, , Segments 1, 3

- 0 - 0

AEP cannot determine the adequacy of the proposed implementation plan until more clarity is provided on the obligations themselves. If it is determined that the obligations *are* very granular (i.e. “per data point”), the implementation plans would be insufficient.

Thomas Foltz, AEP, 5, 11/2/2015

- 0 - 0

We do not agree with the need for the standard, and therefore do not agree with the proposed implementation plan.

Leonard Kula, Independent Electricity System Operator, 2, 11/2/2015

- 0 - 0

Tyson Archie, Platte River Power Authority, 5, 11/4/2015

- 0 - 0

Joe O'Brien, NiSource - Northern Indiana Public Service Co., 6, 11/4/2015

- 0 - 0

Xcel Energy feels that the implementation timeline is too short.  We support the comments of the MRO NSRF recommending a 60 month implementation to allow entities adequate time to assess tools and complete necessary upgrades.

Amy Casuscelli, On Behalf of: Xcel Energy, Inc. - MRO, WECC, SPP RE - Segments 1, 3, 5, 6

- 0 - 0

The implementation plan is too short if entities need to specify, order and deploy new or modified Energy Management Systems (EMS) that can monitor, track, and report real-time data quality and availability in accordance with IRO-018 and TOP-010. Entities should be given an implementation plan with up to 60 months for new EMS software and systems.

The key is to allow entities the proper time to assess their tools and complete the right upgrades once. While prompt actions are good, forcing entities to assess, order, and deploy equipment in 12 or 18 months will lead to errors and possibly more risk of serious outages and problems in the short term.

The standard objective needs to be modified to a feasible reliability objective, such as requiring that the assessment provide reasonably high confidence that the system is in a reliable state. TOPs and BAs should be given much more time to make appropriate changes to their tools and EMS systems and to test their capabilities to detect bad data and to implement operating plans to respond to bad data detection and identification. The time needed to modify, specify, install, adjust, and test systems or tools to meet the proposed standard should be, at a minimum, 3 to 5 years.

MRO-NERC Standards Review Forum (NSRF), Segment(s) 3, 4, 5, 6, 1, 2, 9/9/2015

- 0 - 0

PJM does not support the proposed standards for the reasons noted in 1 and 2 above.

- 5 - 0

Jonathan Appelbaum, 11/5/2015

- 0 - 0

The standard objective needs to be modified to a feasible reliability objective, such as requiring that the assessment provide reasonably high confidence that the system is in a reliable state.  TOPs and BAs should be given much more time to make appropriate changes to their tools and EMS systems and to test their capabilities to detect bad data and to implement operating plans to respond to bad data detection and identification. The time needed to modify, specify, install, adjust, and test systems or tools to meet the proposed standard should be, at a minimum, 3 to 5 years.

Darnez Gresham, Berkshire Hathaway Energy - MidAmerican Energy Co., 3, 11/5/2015

- 0 - 0

Angela Gaines, On Behalf of: Portland General Electric Co., WECC, Segments 1, 3, 5, 6

- 0 - 0

Duke Energy is not in favor of the proposed 12-month and 18-month staggered implementation plan. In one of our previous comments, we requested that additional information be provided regarding what is meant by the term “alarm process monitor”. If the alarm process monitor is something that would necessitate an entity procuring something it does not currently own, then additional time would be needed. The timeframe of 18 months for all requirements seems more appropriate.

Duke Energy , Segment(s) 1, 5, 6, 4/10/2014

- 0 - 0

If TOP-003-3 is approved at the same time as, or after, TOP-010-1, then the result of the implementation plan as drafted is that requirements to have quality data become effective at the same time as requirements that could cause the TOP and BA to be seeing new data for the first time. R5 of TOP-003-3 could result in a large volume of new data, so more time should be afforded to the receiving TOP and BA to become familiar with and begin utilizing that new data. We recommend the timeframes for implementation of TOP-010-1 be modified to be 18 months and 24 months, at a minimum, to allow for separation from TOP-003-3 R5. A section could be added that addresses a scenario where TOP-003-3 is approved well before TOP-010-1.

FMPA, Segment(s) , 11/9/2015

- 0 - 0

ATC supports the comments submitted by the MRO NSRF as it relates to TOP-010-1.

The implementation plan is too short if entities need to specify, order and deploy new or modified Energy Management Systems (EMS) that can monitor, track, and report real-time data quality and availability in accordance with IRO-018 and TOP-010.  Entities should be given an implementation plan with up to 60 months for new EMS software and systems.

 

The key is to allow entities the proper time to assess their tools and complete the right upgrades once.  While prompt actions are good, forcing entities to assess, order, and deploy equipment in 12 or 18 months will lead to errors and possibly more risk of serious outages and problems in the short term.

Andrew Pusztai, 11/9/2015

- 1 - 0

Scott McGough, Georgia System Operations Corporation, 3, 11/9/2015

- 0 - 0

John Brockhan, 11/9/2015

- 0 - 0

Oshani Pathirane, 11/9/2015

- 0 - 0

Jared Shakespeare, 11/9/2015

- 0 - 0

PPL NERC Registered Affiliates, Segment(s) 1, 3, 5, 6, 9/11/2015

- 0 - 0

JEA, Segment(s) , 11/9/2015

- 0 - 0

Based on the proposed standards, 12 months should be sufficient time to simply develop a written procedure and ensure operators are knowledgeable.  However, depending on what the final version of the standard looks like, it may be impossible to meet some of the resulting requirements unless systems are replaced.  In that case, 36 months may be required.

SPP Standards Review Group, Segment(s) 1, 3, 5, 11/9/2015

- 0 - 0

Standards Review Committee (SRC), Segment(s) 2, 11/9/2015

- 0 - 0

Andrea Jessup, On Behalf of: Bonneville Power Administration, WECC, Segments 1, 3, 5, 6

- 0 - 0

Jack Stamper, Clark Public Utilities, 3, 11/9/2015

- 0 - 0

Megan Wagner, 11/9/2015

- 0 - 0

Project 2009-02, Segment(s) 1, 0, 2, 3, 4, 5, 6, 7, 11/9/2015

- 0 - 0

Glenn Pressler, 11/9/2015

- 0 - 0

(1) The implementation plan is too short if entities need to specify, order, and deploy a new or modified Energy Management System (EMS) that can monitor, track, and report real-time data quality and availability in accordance with IRO-018 and TOP-010.  Entities should be given an implementation plan with up to 60 months for new EMS software and infrastructure.

(2) The key is to allow entities adequate time to assess their tools and complete the right upgrades once.  While prompt actions are good, forcing entities to assess, order, and deploy equipment in 12 or 18 months will lead to errors and possibly more risk of serious outages and problems in the short-term.

(3) In the alternative, if the SDT determines that it will not extend the implementation to 60 months, we ask the SDT to consider making all requirements effective after 18 months.  Staggered effective dates have caused significant and unnecessary implementation issues, such as the confusion that occurred with implementing PRC-005 and its various requirements.

ACES Standards Collaborators - Real-time Project, Segment(s) 1, 4, 5, 11/9/2015

- 0 - 0

Jennifer Losacco, On Behalf of: NextEra Energy - Florida Power and Light Co., FRCC, Segments 1

- 0 - 0

Anthony Jablonski, ReliabilityFirst , 10, 11/9/2015

- 0 - 0

Comments: ERCOT's comments above notwithstanding, the proposed implementation plan appears reasonable.

Elizabeth Axson, 11/9/2015

- 0 - 0

12 months may be too short depending on the capabilities of existing systems.  More time may be needed to assess the existing capabilities of the EMS/SCADA system and if new systems are needed, time will be required to specify, order and deploy a new EMS system.

- 0 - 0

ITC supports TOP-010-1 R7 having an independent processor to monitor the Real-time alarm system because it provides value, given the heavy reliance on alarms by System Operators for situational awareness. However, the standard should specify whether the unavailability of the independent processor creates a violation of standard requirements. Also, the implementation plan of 12 months for R7 is unrealistic, as compliance with this requirement may require entities to procure and implement new tools, which is a lengthy process.

Meghan Ferguson, 11/9/2015

- 0 - 0

Texas RE is concerned the Implementation Plan allows for an increase in risk to the BES if quality is not already being addressed.  To ensure reliable operations, Texas RE suggests decreasing the Implementation Plan to a more reasonable time period, such as the first day of the first quarter after approval, for all requirements except R7, which requires TOPs and BAs to utilize an alarm process monitor.  Twelve months is not an unreasonable time for the development of an independent alarm process monitor. 

Rachel Coyne, Texas Reliability Entity, Inc., 10, 11/9/2015

- 0 - 0

David Jendras, Ameren - Ameren Services, 3, 11/9/2015

- 0 - 0

Hot Answers

Southern believes that the VRFs and VSLs for the proposed standards are too high and should be modified.

Southern Company, Segment(s) 1, 6, 3, 5, 4/13/2015

- 0 - 0

- 0 - 0

Other Answers

John Fontenot, 9/24/2015

- 0 - 0

John Fontenot, 9/24/2015

- 0 - 0

John Fontenot, 9/24/2015

- 0 - 0

John Fontenot, 9/24/2015

- 0 - 0

Jeff Wells, On Behalf of: Grand River Dam Authority, , Segments 1, 3

- 0 - 0

The team may want to consider using a more gradient-based approach for R1, R2, and R5, and using more than two VSL categories (driven by the number of elements not considered). If the requirements continue to use two VSL categories only, the High VSL should instead state “excluded at least one but not all of the elements…”

 

Thomas Foltz, AEP, 5, 11/2/2015

- 0 - 0

We do not agree with the need for the standard, and therefore do not agree with the proposed VRFs and VSLs.

Leonard Kula, Independent Electricity System Operator, 2, 11/2/2015

- 0 - 0

Tyson Archie, Platte River Power Authority, 5, 11/4/2015

- 0 - 0

Joe O'Brien, NiSource - Northern Indiana Public Service Co., 6, 11/4/2015

- 0 - 0

Xcel Energy believes that the proposed VSLs are not appropriate.  The full spectrum of VSLs (Low/Med/High/Severe) should be utilized for each requirement, and full clarification of what constitutes a violation at each severity level should be disseminated.

Amy Casuscelli, On Behalf of: Xcel Energy, Inc. - MRO, WECC, SPP RE - Segments 1, 3, 5, 6

- 0 - 0

The binary approach to the VSLs seems too severe.  Suggest that the drafting team consider revising the VSLs to assign Moderate if the entity missed one data quality element, High if it missed two, and Severe if it missed three or all of them.

R3 & R4:  It is not clear from the wording of the single VSL level (which is Severe) whether a violation of this Standard is incurred only if there are NO indications of quality of real-time data.  If the meaning is to include situations where one or a few points with bad quality are missed (i.e., not notified to an operator), then assigning a Severe VSL seems inappropriate, and several levels of violations should be implemented.

R6:  Is it correct that a violation of this Standard is incurred only if there are NO indications provided to operators of poor quality of analysis results, and that missing some number of these instances is not a violation of this Standard?  If the intent is to consider even a single miss a violation then assigning it a Severe VSL seems inappropriate, and several levels of violations should be implemented.

R7:  Is it correct that occasional failures of the independent alarm process monitor are not violations of this Standard?

The standard objective needs to be modified to a feasible reliability objective such as the assessment provides a reasonably high confidence interval that the system is in a reliable state. Vague and unclear definitions will lead to significant audit discrepancies as to what appropriate measures are when it comes to implementation of operating processes/procedures.

MRO-NERC Standards Review Forum (NSRF), Segment(s) 3, 4, 5, 6, 1, 2, 9/9/2015

- 0 - 0

- 5 - 0

The VSLs could be utilized to mitigate compliance exposure for R2 and other 24/7/365 requirements.  The VSL for data quality could be stepped by the percentage of points with bad quality, or by duration.  The most severe would be data quality that prevents the EMS from solving.
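A stepped VSL of that kind could be expressed as a simple banding of the share of degraded points, as in the sketch below. The percentage thresholds are purely illustrative assumptions, not values proposed by the commenter or the standard.

```python
def vsl_for_data_quality(bad_points: int, total_points: int,
                         ems_solving: bool) -> str:
    """Map the share of bad-quality points to a graduated VSL band.
    Thresholds are illustrative only; assumes total_points > 0."""
    if not ems_solving:
        return "Severe"  # data quality prevents the EMS from solving
    pct = 100.0 * bad_points / total_points
    if pct < 1:
        return "Lower"
    if pct < 5:
        return "Moderate"
    if pct < 10:
        return "High"
    return "Severe"
```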

Jonathan Appelbaum, 11/5/2015

- 0 - 0

The standard objective needs to be modified to a feasible reliability objective such as the assessment provides a reasonably high confidence interval that the system is in a reliable state.  Vague and unclear definitions will lead to significant audit discrepancies as to what appropriate measures are when it comes to implementation of operating processes/procedures.

Darnez Gresham, Berkshire Hathaway Energy - MidAmerican Energy Co., 3, 11/5/2015

- 0 - 0

Angela Gaines, On Behalf of: Portland General Electric Co., WECC, Segments 1, 3, 5, 6

- 0 - 0

Duke Energy , Segment(s) 1, 5, 6, 4/10/2014

- 0 - 0

FMPA, Segment(s) , 11/9/2015

- 0 - 0

ATC supports the comments submitted by the MRO NSRF as it relates to TOP-010-1. 

 

The binary approach to the VSLs seems too severe.  Suggest that the drafting team consider revising the VSLs to utilize moderate, high, and then severe if the entity missed one, two, three, or finally all data quality elements.

 

o   R3 & R4:  It is not clear from the wording of the single VSL level (which is Severe) whether a violation of this Standard is incurred only if there are NO indications of quality of real-time data.  If the meaning is to include situations where one or a few points with bad quality are missed (i.e., not notified to an operator), then assigning a Severe VSL seems inappropriate, and several levels of violations should be implemented.

o   R6:  Is it correct that a violation of this Standard is incurred only if there are NO indications provided to operators of poor quality of analysis results, and that missing some number of these instances is not a violation of this Standard?  If the intent is to consider even a single miss a violation then assigning it a Severe VSL seems inappropriate, and several levels of violations should be implemented.

R7:  Is it correct that occasional failures of the independent alarm process monitor are not violations of this Standard?

Andrew Pusztai, 11/9/2015

- 1 - 0

Scott McGough, Georgia System Operations Corporation, 3, 11/9/2015

- 0 - 0

CenterPoint Energy feels the VSLs for R1, R2, and R5 do not match the intended meaning in the language of the Requirements (implementation).  It appears the focus is more on exclusion of criteria during the development phase of Operating Processes and Procedures.  CenterPoint Energy feels there are development phases and implementation phases of Operating Processes and Procedures, and perhaps the Requirements should be separated to reflect each.  In doing so, the VSLs could and should be more balanced, in both instances, from Lower VSL to Severe VSL, and not so heavily weighted toward documentation deficiencies.     

John Brockhan, 11/9/2015

- 0 - 0

Oshani Pathirane, 11/9/2015

- 0 - 0

Jared Shakespeare, 11/9/2015

- 0 - 0

PPL NERC Registered Affiliates, Segment(s) 1, 3, 5, 6, 9/11/2015

- 0 - 0

JEA, Segment(s) , 11/9/2015

- 0 - 0

Could it not be a lower VSL for R1 on IRO-018-1 if only one element was missing, then a medium VSL if two elements were missing, then Severe if more than two were missing?

SPP Standards Review Group, Segment(s) 1, 3, 5, 11/9/2015

- 0 - 0

Standards Review Committee (SRC), Segment(s) 2, 11/9/2015

- 0 - 0

Andrea Jessup, On Behalf of: Bonneville Power Administration, WECC, Segments 1, 3, 5, 6

- 0 - 0

Jack Stamper, Clark Public Utilities, 3, 11/9/2015

- 0 - 0

Megan Wagner, 11/9/2015

- 0 - 0

Project 2009-02, Segment(s) 1, 0, 2, 3, 4, 5, 6, 7, 11/9/2015

- 0 - 0

Glenn Pressler, 11/9/2015

- 0 - 0

The SDT should consider revising the VSLs to be on a graduated scale.  Binary treatment of these requirements is improper and leads to dollar penalties for violations that are not commensurate with the risks to reliability.

ACES Standards Collaborators - Real-time Project, Segment(s) 1, 4, 5, 11/9/2015

- 0 - 0

Jennifer Losacco, On Behalf of: NextEra Energy - Florida Power and Light Co., FRCC, Segments 1

- 0 - 0

Anthony Jablonski, ReliabilityFirst , 10, 11/9/2015

- 0 - 0

Comments: As the proposed requirements in IRO-018 and TOP-010 are primarily administrative in nature, ERCOT does not support the approval of VSLs that are high and severe.  Administrative requirements regarding operating processes should be considered a low VSL; alarming or other indicator activity should be considered for a VSL no higher than medium.

Elizabeth Axson, 11/9/2015

- 0 - 0

- 0 - 0

Refer to comments submitted for question #3.

Meghan Ferguson, 11/9/2015

- 0 - 0

Texas RE recommends revising the VSLs for proposed IRO-018-1 R1 and TOP-010-1 R1 and R2.  Specifically, the distinction between a High VSL and a Severe VSL for each of these requirements needs clarification as to how the subparts establishing the various required elements that must be included within the criteria for evaluating Real-time data quality discrepancies, in Parts 1.1 and 2.1 respectively, will be addressed. 

The current VSLs for each of these three requirements could be read to assign only a High VSL to a registered entity that has: (1) only adopted one of the four required criteria elements in Part 1.1 (or Part 2.1 for TOP-010-1 R2) for evaluating potential Real-time data quality discrepancies; and (2) not adopted any actions to coordinate the resolution of Real-time data quality discrepancies as required under Part 1.2 (or Part 2.2 for TOP-010-1 R2).  For example, the High VSL category for TOP-010-1 R1 could potentially apply to a Registered Entity that adopts criteria for evaluating data outside of a prescribed data range, but fails to adopt similar criteria for analog data that is not updated within a predetermined time period, data entered manually to override telemetered information, and data otherwise identified as invalid or suspect, and also fails to specify any actions to coordinate the resolution of Real-time data quality discrepancies with the entity responsible for providing the data. 

Texas RE suggests a better approach would be to specify that a High VSL for proposed IRO-018-1 R1 and TOP-010-1 R1 and R2 would apply to Registered Entities that have failed to adopt one or more of the required criteria in Parts 1.1 or 2.1, respectively, or have failed to adopt actions to address Real-time data discrepancies as required in Parts 1.2 or 2.2, respectively.  The Severe VSL category would then be reserved for instances in which a Registered Entity has failed both to (1) adopt one or more of the required criteria for evaluating Real-time data quality discrepancies and (2) adopt actions to coordinate resolution of Real-time data quality discrepancies.  To use the previous example regarding the VSLs for TOP-010-1 R1, a Registered Entity that adopts criteria for evaluating data outside of a prescribed data range, but fails to adopt similar criteria for analog data that is not updated within a predetermined time period, data entered manually to override telemetered information, and data otherwise identified as invalid or suspect, and also fails to specify any actions to coordinate the resolution of Real-time data quality discrepancies, would now be subject to a Severe VSL. 

This approach would align the VSLs for IRO-018-1 R1 and TOP-010-1 R1 and R2 with the VSLs for other requirements in the proposed standards that do not have specifically required criteria elements.  For example, under TOP-010-1 R5, the High VSL category applies to a Registered Entity if it does not establish (1) criteria for evaluating the quality of any analysis used in its Real-time Assessments; or (2) actions to resolve quality deficiencies.  In turn, the Severe VSL category under TOP-010-1 R5 is applicable to a Registered Entity that has failed to establish both the criteria for evaluation and the actions to resolve quality deficiencies.

Rachel Coyne, Texas Reliability Entity, Inc., 10, 11/9/2015

- 0 - 0

No, we ask to leave them as currently written for TOP-010-1 requirements.

David Jendras, Ameren - Ameren Services, 3, 11/9/2015

- 0 - 0

Hot Answers

Southern Company, Segment(s) 1, 6, 3, 5, 4/13/2015

- 0 - 0

- 0 - 0

Other Answers

na

John Fontenot, 9/24/2015

- 0 - 0

na

John Fontenot, 9/24/2015

- 0 - 0

na

John Fontenot, 9/24/2015

- 0 - 0

na

John Fontenot, 9/24/2015

- 0 - 0

Jeff Wells, On Behalf of: Grand River Dam Authority, , Segments 1, 3

- 0 - 0

AEP has chosen to vote negative on TOP-010-1, primarily driven by our concerns about a) how granular the Requirements may be regarding the data points themselves and b) the impact of R1.2 on externally provided data.  As previously stated, TOP-010-1 must be written in a reasonable manner that is able to accommodate the high volume of data points, which continues to increase.

 

Thomas Foltz, AEP, 5, 11/2/2015

- 0 - 0

Certification requirements are the appropriate place for mandating facilities and capabilities needed to perform reliability functions. These requirements can be enforced in a similar fashion as their reliability standard counterparts without de-certifying an entity if and when requirements are violated. We urge the drafting team, NERC, the Standards Committee, and the regulators to think outside of the box and not let the right approach be bound by the existing document framework.

Leonard Kula, Independent Electricity System Operator, 2, 11/2/2015

- 0 - 0

Tyson Archie, Platte River Power Authority, 5, 11/4/2015

- 0 - 0

none

Joe O'Brien, NiSource - Northern Indiana Public Service Co., 6, 11/4/2015

- 0 - 0

Xcel Energy suggests that the SDT clarify what qualifies as "independent" in TOP-010-1 R7.  Can this include a separate and independent process within the same EMS?

Amy Casuscelli, On Behalf of: Xcel Energy, Inc. - MRO, WECC, SPP RE - Segments 1, 3, 5, 6

- 0 - 0

Suggest that the Standard Drafting Team clarify that an independent alarm process monitor can be a separate and independent process within the same EMS (R7). Therefore, if an entity already has a heartbeat monitor integrated into its EMS, the heartbeat monitor can be used.  Independent does not necessarily mean an independent box/system completely separate from the EMS.

MRO-NERC Standards Review Forum (NSRF), Segment(s) 3, 4, 5, 6, 1, 2, 9/9/2015

- 0 - 0

- 4 - 0

This standard may establish an incentive for RCs and TOPs to limit the data they incorporate into their EMS, since each point incorporated increases compliance risk.

Jonathan Appelbaum, 11/5/2015

- 0 - 0

Darnez Gresham, Berkshire Hathaway Energy - MidAmerican Energy Co., 3, 11/5/2015

- 0 - 0

Angela Gaines, On Behalf of: Portland General Electric Co., WECC, Segments 1, 3, 5, 6

- 0 - 0

Duke Energy requests clarification on the use of the time horizon Same Day Operations throughout the standard. How does the drafting team envision this time horizon corresponding with Real-time monitoring and assessments?

Duke Energy , Segment(s) 1, 5, 6, 4/10/2014

- 0 - 0

FMPA, Segment(s) , 11/9/2015

- 0 - 0

ATC supports the comments submitted by the MRO NSRF as it relates to TOP-010-1.

Suggest that the Standard Drafting Team clarify that an independent alarm process monitor can be a separate and independent process within the same EMS (R7). Therefore, if an entity already has a heartbeat monitor integrated into its EMS, the heartbeat monitor can be used.  Independent does not necessarily mean an independent box/system completely separate from the EMS.

Andrew Pusztai, 11/9/2015

- 1 - 0

Scott McGough, Georgia System Operations Corporation, 3, 11/9/2015

- 0 - 0

CenterPoint Energy has no additional comments.

John Brockhan, 11/9/2015

- 0 - 0

Oshani Pathirane, 11/9/2015

- 0 - 0

Jared Shakespeare, 11/9/2015

- 0 - 0

PPL NERC Registered Affiliates, Segment(s) 1, 3, 5, 6, 9/11/2015

- 0 - 0

JEA, Segment(s) , 11/9/2015

- 0 - 0

SPP Standards Review Group, Segment(s) 1, 3, 5, 11/9/2015

- 0 - 0

Standards Review Committee (SRC), Segment(s) 2, 11/9/2015

- 0 - 0

Andrea Jessup, On Behalf of: Bonneville Power Administration, WECC, Segments 1, 3, 5, 6

- 0 - 0

Jack Stamper, Clark Public Utilities, 3, 11/9/2015

- 0 - 0

Westar supports the comments provided by the SPP RTO.

Megan Wagner, 11/9/2015

- 0 - 0

Even though IRO-018-1 and TOP-010-1 apply to different functional entities, their contents are repetitive.  It would be less cumbersome to generate one standard applicable to all of the functional entities.


Results-based standards should focus on the “what,” or objective. Opinions expressed by some are that the standard is overly prescriptive and could be more suited to a guideline document.

Project 2009-02, Segment(s) 1, 0, 2, 3, 4, 5, 6, 7, 11/9/2015

- 0 - 0

Glenn Pressler, 11/9/2015

- 0 - 0

We question the SDT’s practice of posting the revised SAR along with the draft standard.  It is unclear if the industry is to provide feedback about the removal of “analysis” from the SAR.  This appears to be a substantive change to the project’s scope.

ACES Standards Collaborators - Real-time Project, Segment(s) 1, 4, 5, 11/9/2015

- 0 - 0

Jennifer Losacco, On Behalf of: NextEra Energy - Florida Power and Light Co., FRCC, Segments 1

- 0 - 0

Anthony Jablonski, ReliabilityFirst , 10, 11/9/2015

- 0 - 0

ERCOT expresses concern that overly prescriptive requirements will hinder, not benefit, the processes and interactions currently occurring between functional entities, as well as the continuous improvement of tools and associated capabilities.  If the risk to be addressed is operator awareness of data and analysis quality issues and the taking of prompt action to resolve such issues, ERCOT recommends limited requirements that most directly address these risks.  Overly prescriptive requirements that hinder tool and analysis improvement and the free flow of functional entity communications that are already occurring do not benefit reliability.  Further, the complicated nature of data exchange, inputs, and analyses requires coordination and cooperation among many registered entities.  Without a reciprocal obligation by other entities to facilitate responsiveness when an issue arises, the proposed standards and requirements will not achieve their intended objective.  Until such an obligation is included in the proposed standard, ERCOT is unable to support its approval.  This reciprocal obligation is critical for achieving the implied objective of the proposed standard because, even where a Reliability Coordinator initiates resolution of issues quickly, a lack of responsiveness by the entity that is situated to address the issue will prevent effective, efficient resolution.

Elizabeth Axson, 11/9/2015

- 0 - 0

- 0 - 0

Overall, ITC supports the intent of the standard, which is to ensure that the quality of Real-time Assessments is adequate to maintain BES reliability. However, assessing Real-time data and Real-time Assessment quality is a function better performed offline, using larger sets of historical data to identify systematic issues, monitor performance trends of Real-time Assessments, and implement corrective actions. The proposed standard as written can be interpreted to mean that System Operators should monitor and address all data and Real-time Assessment quality issues in Real-time, which may be distracting to the Operator. In addition, the term ‘quality’ is very subjective and open to different interpretations by different TOPs and Regional Entities, making it difficult to demonstrate and assess compliance.

Meghan Ferguson, 11/9/2015

- 0 - 0

Texas RE recommends reviewing the references in the Evidence Retention section of TOP-010-1.  There is a reference to R5 and R6 having a rolling 30-day period for evidence.  That appears to be an incorrect reference, as R5 requires implementation of a process or procedure; a 30-day period is a short timeframe for such evidence and is not supported by industry practice.  A similar statement applies to IRO-018-1, except that it references R3 and R4, and R3 is a requirement to implement a procedure.  The SDT may have been trying to capture the quality-of-data indication requirements in each of the standards.

In TOP-010-1, why is the data retention for a BA different from that for a TOP? (This relates to the incorrect reference; if the reference is corrected, this issue goes away.)

Rachel Coyne, Texas Reliability Entity, Inc., 10, 11/9/2015

- 0 - 0

In our opinion, clarity is needed throughout the proposed standards so that entities will not be confused about how the requirements will be audited.

David Jendras, Ameren - Ameren Services, 3, 11/9/2015

- 0 - 0