Dashboard Help

The new Subway Public Dashboard is a user-friendly tool that tells our customers how we’re doing by:

  1. Displaying new measures that reflect the performance of our subway system in a way that’s more relevant to customers.
  2. Providing our customers with valuable information – about the entire system, or just about the line or lines they take.
  3. Offering an easy-to-use tool that improves transparency.

Frequently Asked Questions:

The dashboard indicators, except for MDBF and Elevator and Escalator Availability, reflect weekday service only. MDBF and Elevator and Escalator Availability are measured across all days of the week.
When no time periods are selected, data is shown for the whole day (generally 24 hours). Peak hours are from 7 a.m. to 10 a.m. and from 4 p.m. to 7 p.m. Off-peak hours are the times of day outside of those hours. Exceptions are noted in the indicator calculation details below (e.g., APT/ATT/CJTP are available for trips starting from 6 a.m. to 11 p.m. on weekdays).

Major Incidents are incidents that delay 50 or more trains. Such events cause the most disruption to customers. Major incidents are separated into six categories:

  1. Track - Track fires, broken rails, switch trouble, and other track conditions.
  2. Signals - Signal and track circuit failures, and other equipment and transmission problems related to signals, both for conventional color-light signals and for new technology Communications-Based Train Control (CBTC) signals.
  3. Persons on Trackbed/Police/Medical - Police and/or medical activity due to sick customers, vandalism, assault, persons struck by trains, unauthorized persons on tracks, and suspicious packages.
  4. Stations & Structures - Obstructions and other structural problems, such as damage to tunnels or debris on the right-of-way; electrical problems such as defective wires, cables, and power systems that aren’t on trains, including traction power to run the trains.
  5. Car Equipment - Broken doors, seats, windows, lights, brakes, and other problems caused by defective trains, such as power or air conditioning failures.
  6. Other - Inclement weather, water conditions, external power supply failures, as well as drawbridge openings and other external conditions, such as unstable nearby buildings, nearby fires, civil demonstrations, and/or parades.

Filtering by Peak or Off-Peak shows incidents that began during the selected time period (regardless of how long the incident lasted).

Service Delivered (sometimes referred to as throughput) measures our ability to deliver the scheduled service. Service Delivered is measured along the busiest part of each line, which serves as a representative point for service across the entire line, and is reported as the percentage of scheduled trains provided during peak hours.

Peak hours are as noted above.
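
As an illustration, Service Delivered reduces to a ratio of actual to scheduled trains at the line's busiest point. The following is a minimal sketch; the train counts are invented, not MTA data:

```python
# Minimal sketch of the Service Delivered calculation.
# The train counts below are invented examples, not actual MTA data.
def service_delivered(actual_trains: int, scheduled_trains: int) -> float:
    """Percentage of scheduled peak-hour trains actually provided,
    counted at the busiest point of the line."""
    return 100.0 * actual_trains / scheduled_trains

# Example: 276 of 300 scheduled peak trains observed -> 92.0%
print(f"{service_delivered(276, 300):.1f}%")
```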

Additional Platform Time (APT) is the estimated average extra time that customers spend waiting on the platform for a train, compared with their scheduled wait time. Similarly, Additional Train Time (ATT) is the estimated average extra time that customers spend onboard a train, compared with the time they would have spent onboard if trains were running according to schedule. Journey Time is the average time it takes a customer to complete a trip, from the time they start waiting at the origin platform to the time they alight from a train at their destination. Additional Journey Time (AJT) is the average extra time that a customer’s trip takes, compared to their scheduled trip time; it is equivalent both to the sum of APT and ATT and to Journey Time minus the average scheduled trip time. Please note that the Journey Time and Additional Journey Time metrics do not appear in the charts on the dashboard; they appear only in the state-mandated metrics Excel file. All estimates are for each individual train a customer uses in their journey, not all trains in their journey combined (in transit terminology, each “unlinked trip”). Put another way, each “leg” of a subway journey involving transfers is counted separately in calculating these metrics. This allows these metrics to be calculated for each individual line, as well as for the whole system. These metrics are reported for trips starting from 6 a.m. to 11 p.m. on weekdays.
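
The arithmetic relating these four metrics can be shown in a short sketch; the minute values below are invented for illustration:

```python
# Invented example of how APT, ATT, Journey Time, and AJT relate
# for a single unlinked trip (all values in minutes).
scheduled_wait, actual_wait = 4.0, 6.5    # platform time
scheduled_ride, actual_ride = 12.0, 13.5  # on-train time

apt = actual_wait - scheduled_wait        # Additional Platform Time = 2.5
att = actual_ride - scheduled_ride        # Additional Train Time    = 1.5
journey_time = actual_wait + actual_ride  # Journey Time             = 20.0
ajt = journey_time - (scheduled_wait + scheduled_ride)  # AJT = 4.0

assert abs(ajt - (apt + att)) < 1e-9      # AJT equals APT + ATT
```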

APT, ATT, AJT, and Journey Time are calculated using a process that takes as input ridership data from MetroCard swipes and OMNY taps, train schedules, and actual train movement data, and then estimates the actual and scheduled journey time for every weekday journey “leg”, or unlinked trip, on the subway system. This process has four main parts:

  1. Generate a sample of millions of unlinked trips, which represent the ridership patterns for a typical recent weekday. Historically, this sample has typically been updated monthly, but due to dramatic ridership changes during the COVID-19 pandemic, updates have been made more frequently. This step takes as input MetroCard/OMNY data for a representative weekday and the typical subway service patterns for that weekday. It begins with inferring customers’ journey destinations from patterns in MetroCard/OMNY data. Next, their journeys are split into unlinked trips based on the best route given their origin and destination. This part of the process is very computationally intensive; for this reason, it is only feasible to run it for a small number of sample days, rather than every weekday.
  2. Every weekday, compile the schedule data (which includes service changes due to planned work) and actual train movement data for that day, into a pair of timetables. The actual train movement data comes from several data sources, including track circuit data and the systems that power the subway countdown clocks and real-time app information.
  3. Every weekday, assign the sample unlinked trips to a scheduled train and an actual train. In this assignment, customers “choose” whatever train they expect will complete the trip the fastest on a given day, regardless of line/service and local/express. In making this choice, they are restricted to the particular “corridor” (e.g., 1/2/3 in Manhattan, A/C in Brooklyn, B/D in the Bronx) they were assigned in Part 1. For example, if an unlinked trip is from Union Square to Grand Central, a customer can take the 4, 5, or 6 trains, but not the N/Q/R/W to Times Square and then the 7. Restricting the choices this way makes it computationally feasible to run the assignment daily.
  4. For every unlinked trip, calculate the scheduled and actual waiting time and on-train time based on the assigned trains. From this information, calculate the APT, ATT, AJT, and Journey Time for each unlinked trip, and aggregate to produce line-level metrics for each month, as sketched below.
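
A highly simplified sketch of Part 4 follows; the trip records and field layout are hypothetical, and the real pipeline runs over millions of sampled trips:

```python
# Hypothetical sketch of Part 4: compute per-trip APT/ATT/AJT from the
# assigned trains, then aggregate to line-level averages.
from statistics import mean

# Each invented record: (line, scheduled_wait, actual_wait,
#                        scheduled_ride, actual_ride), in minutes.
trips = [
    ("4", 3.0, 5.0, 10.0, 11.0),
    ("4", 4.0, 4.5, 15.0, 15.0),
    ("L", 2.5, 2.5, 8.0, 10.5),
]

by_line: dict[str, list[tuple[float, float, float]]] = {}
for line, sw, aw, sr, ar in trips:
    apt, att = aw - sw, ar - sr
    by_line.setdefault(line, []).append((apt, att, apt + att))

for line, rows in sorted(by_line.items()):
    avg_apt, avg_att, avg_ajt = (mean(col) for col in zip(*rows))
    print(line, round(avg_apt, 2), round(avg_att, 2), round(avg_ajt, 2))
```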

This process relies on several key assumptions, to overcome data availability and computational feasibility issues:

  • A passenger’s next MetroCard swipe (or OMNY tap) location generally predicts their destination
  • Weekday (non-holiday) ridership, i.e. the set of journeys passengers want to take, does not vary substantially day to day within a month (or during shorter periods when the sample trips are updated more frequently)
  • Customers use the same “corridors” for a given journey day to day, choosing a consistent “path” based on what they consider typical service (though they will change between trains on that corridor, e.g., 4/5/6)

These simplifying assumptions are, of course, not strictly correct: a person’s next swipe doesn’t always predict their destination, subway ridership does vary somewhat day to day, and people do sometimes vary their path for the same journey from one day to the next. However, they serve as valuable approximations of weekday passenger behavior, which make the process’s estimates of these metrics close to their “actual” values at an aggregate level. These assumptions do not hold as well during late nights and weekends, largely due to frequent service changes caused by track work, which is primarily why these metrics are reported only for weekdays and for trips starting between 6 a.m. and 11 p.m. In the era of the COVID-19 pandemic, several of these assumptions are clearly less valid than when the metrics were launched. This, along with other data issues, prompted some days to be excluded from reporting, as well as changes to the sample trip generation process to use more up-to-date sample days.

At the same time, the rollout of OMNY has improved data quality. While MetroCard data indicates only the 6-minute window within which a swipe took place, OMNY provides a precise timestamp. This requires fewer assumptions, as there is no need to distribute swipes across the 6-minute periods used for MetroCards.
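
The difference can be pictured in a small sketch, under the simplifying assumption (ours, not necessarily NYCT's method) that a swipe is assigned a uniformly random time within its window:

```python
# Invented illustration: a MetroCard swipe is known only to a 6-minute
# window, while an OMNY tap carries an exact timestamp.
import random
from datetime import datetime, timedelta

window_start = datetime(2023, 5, 1, 8, 30)  # swipe fell somewhere in 8:30-8:36
metrocard_time = window_start + timedelta(seconds=random.uniform(0, 360))

omny_time = datetime(2023, 5, 1, 8, 32, 17)  # tap recorded to the second
print(metrocard_time, omny_time)
```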

As data availability changes (e.g., rollout of new data sources for train locations, continued rollout of OMNY), updates will be made to the way these metrics are generated. Similarly, the methodology may be refined. If warranted, this may prompt revision of historical data to provide a comparable time series. Changes will also likely impact related metrics such as Customer Journey Time Performance, Journey Time, and Additional Journey Time.

Customer Journey Time Performance (CJTP) estimates the percentage of customer trips with a total travel time within 5 minutes of the scheduled time. It is equivalent to the percentage of customer trips with Additional Platform Time (APT) plus Additional Train Time (ATT) of less than 5 minutes. It is calculated using the same process used to calculate APT and ATT; see the description of those metrics for how this process works, as well as the relevant assumptions. Like APT and ATT, CJTP is estimated for each individual train a customer uses in their journey, not all trains in their journey combined (in transit terminology, at the “unlinked trip” level). It is reported for weekdays for trips starting from 6 a.m. to 11 p.m.
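
The CJTP definition reduces to a threshold test over the same per-trip values; here is a sketch with invented APT/ATT pairs:

```python
# Invented sketch of CJTP: the share of unlinked trips whose
# APT + ATT comes to less than 5 minutes.
trips_apt_att = [(2.5, 1.5), (0.0, 0.5), (4.0, 3.0), (1.0, 0.0)]

on_time = sum(1 for apt, att in trips_apt_att if apt + att < 5.0)
cjtp = 100.0 * on_time / len(trips_apt_att)
print(f"CJTP = {cjtp:.1f}%")  # 3 of 4 trips within 5 minutes -> 75.0%
```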

Mean Distance Between Failures (MDBF) reports how frequently car-related problems such as door failures, loss of motor power, or brake issues cause a delay of over five minutes. It is calculated by dividing the number of miles train cars run in service by the number of incidents due to car-related problems.

MDBF numbers include weekdays and weekends. Due to statistical variations in the monthly figures, this number is reported as a rolling 12-month average.
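
As a sketch (the mileage and incident counts are invented), the rolling calculation divides 12 months of car-miles by 12 months of qualifying incidents:

```python
# Invented sketch of a rolling 12-month MDBF calculation.
monthly_miles = [10_500_000] * 12   # car-miles run in service, per month
monthly_incidents = [72] * 12       # car-related delay incidents, per month

mdbf = sum(monthly_miles) / sum(monthly_incidents)
print(f"MDBF = {mdbf:,.0f} miles")  # total miles / total incidents
```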

[Table: Car Class / Lines. Classes listed: R62, R62A, R142, R142A, R188, R32, R42, R46, R68, R68A, R143, R160, R179.]

Subway Car Passenger Environment Survey Key Performance Indicators (PES-KPIs) group the results of surveys of subway car condition into three categories:

  • Appearance – Do the trains appear clean? Are they free of graffiti?
  • Equipment – Do the door panels, lighting, heat and air-conditioning work?
  • Information – Is the information helpful and appropriate? Are there maps, proper signage? Are the conductor’s announcements clear?

Due to statistical variations in the monthly surveys, this number is reported as a rolling 12-month average.

Station Passenger Environment Survey Key Performance Indicators (PES-KPIs) group the results of surveys of station condition into three categories:

  • Appearance – Is the station clean and free of graffiti?
  • Equipment – Are MetroCard vending machines, turnstiles and station attendant booths in working order?
  • Information – What service information is available to our customers to help ease their commute? Are there maps easily visible and in good condition? Are Transit employees available, in proper uniform and able to provide customer assistance? Is the signage clear and up-to-date?

Due to statistical variations in the monthly surveys, this number is reported as a rolling 12-month average.

Elevator and Escalator Availability is defined as the amount of time that elevators or escalators are operational systemwide. Availability for a given time period is the percentage of that period a unit is available for customer use. All service outages, regardless of cause, count as downtime in availability calculations, except for units out of service for capital rehabilitation, which are excluded.

Most elevators and escalators in the subway are maintained by NYCT and are electronically monitored 24 hours a day. Some elevators and escalators in the subway are owned and maintained by third parties; these are inspected by NYCT personnel every 8 hours. The dashboard numbers include weekdays and weekends.
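
A minimal sketch of the availability calculation follows, with invented outage records (capital-rehabilitation outages excluded, per the definition above):

```python
# Invented sketch: availability = (period - counted downtime) / period.
PERIOD_HOURS = 24 * 30  # one 30-day month for a single unit

# (outage_hours, is_capital_rehab) records for one elevator -- invented.
outages = [(5.0, False), (2.0, False), (48.0, True)]

downtime = sum(h for h, rehab in outages if not rehab)  # rehab excluded
availability = 100.0 * (PERIOD_HOURS - downtime) / PERIOD_HOURS
print(f"{availability:.2f}%")
```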

Wait Assessment (WA) measures how regularly trains are spaced during peak hours at selected timepoints on each line. To meet the standard, the headway (time between trains) can be no greater than 25% more than the scheduled headway. This yields the percentage of trains meeting the standard, but it does not account for extra service operated, is not weighted by how many customers are waiting at different stations, does not distinguish between relatively minor gaps in service and major delays, and is not a true measurement of the time customers spend waiting on the platform.
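
The standard itself reduces to a per-train check against 125% of the scheduled headway; here is a sketch with invented headways:

```python
# Invented sketch of the Wait Assessment standard: an observed headway
# passes if it is no more than 25% above the scheduled headway.
def meets_standard(actual_headway: float, scheduled_headway: float) -> bool:
    return actual_headway <= 1.25 * scheduled_headway

observed = [4.5, 6.0, 7.0, 5.5]  # minutes, against a 5-minute schedule
wa = 100.0 * sum(meets_standard(h, 5.0) for h in observed) / len(observed)
print(f"WA = {wa:.1f}%")  # 7.0 min exceeds 6.25 -> 3 of 4 pass -> 75.0%
```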

Terminal On-Time Performance (TOTP) measures the percentage of trains arriving at their destination terminals as scheduled. A train is counted as on-time if it arrives at its destination early, on time, or no more than five minutes late, and has not skipped any planned stops. TOTP is a legacy metric that provides a measure of trains arriving within the standard, and not a direct measure of customer travel time, particularly since relatively few customers travel all the way to the end of a line.
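
The per-train on-time test can be sketched as follows (the function and values are illustrative, not NYCT's implementation):

```python
# Invented sketch of the Terminal On-Time Performance test for one train.
def is_on_time(minutes_late: float, skipped_planned_stops: int) -> bool:
    """Early (negative lateness) or up to 5 minutes late at the terminal,
    with no planned stops skipped."""
    return minutes_late <= 5.0 and skipped_planned_stops == 0

print(is_on_time(3.0, 0))  # True: 3 minutes late, all stops made
print(is_on_time(2.0, 1))  # False: a planned stop was skipped
```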

The new measures contained in this dashboard – APT, ATT, and CJTP – have been made possible systemwide by the MTA's investments in new train tracking technology and more robust methods for determining how customers use the subway system. For the B Division, this technology became sufficient in March 2017. As a result, systemwide data prior to March 2017 only includes data for the A Division lines.
The W service began in November 2016. As of April 2018, TOTP for the W is reported with the N line.
Delays are calculated based on actual service as compared with schedules loaded into NYC Transit’s electronic systems. The rapid implementation of the Essential Service Plan in response to the COVID-19 outbreak meant that those electronic systems did not properly reflect the service or schedules operated between March 22, 2020 and April 13, 2020. To maintain comparisons with historical data, the totals for the first 15 weekdays in March were scaled up, using the daily average, to a projected total for the month’s 22 weekdays. In April, the daily average from the last 13 weekdays was likewise scaled up to a projected total for 22 weekdays.
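
The factoring-up amounts to scaling the observed daily average to a full month; here is a sketch with an invented delay total:

```python
# Invented sketch of the March 2020 adjustment: project a 22-weekday
# total from the 15 weekdays with reliable schedule data.
observed_delays = 15_000  # invented total for the first 15 weekdays
projected_total = observed_delays / 15 * 22
print(round(projected_total))  # 22,000 projected monthly delays
```
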
Starting in January 2021, NYCT implemented data processing changes to improve the accuracy of the APT, ATT, and CJTP metrics. To maintain the ability to compare past performance with current performance, we have rerun the full history of these metrics with the improved process; therefore, current performance results may differ from previously reported values. The specifics of these improvements, and their effects on reported performance, are described in more detail in this document.