Dashboard Help

The NYCT Performance Dashboard is a user-friendly tool that tells our customers how we’re doing by:

  1. Displaying new measures that reflect the performance of our transit system in a way that’s more relevant to customers.
  2. Providing our customers with valuable information – about the entire system, or just about the line, or lines, that they take.
  3. Offering an easy-to-use tool that improves transparency.

The dashboard indicators, except for MDBF and Elevator and Escalator Availability, reflect weekday service only. MDBF and Elevator and Escalator Availability are measured across all days of the week.

When no time periods are selected, data is shown for the whole day (generally 24 hours). Peak hours are from 7 am to 10 am in the morning and 4 pm to 7 pm in the evening. Off-peak hours are the times of day outside of those hours. Exceptions are noted in the indicator calculation details below (e.g., APT/ATT/CJTP are available for trips starting from 6 a.m. to 11 p.m. on weekdays).

Major Incidents are incidents that delay 50 or more trains. Such events cause the most disruption to customers. Major incidents are separated into six categories:

  1. Track - Track fires, broken rails, switch trouble, and other track conditions.
  2. Signals - Signal and track circuit failures, and other equipment and transmission problems related to signals, both for conventional color-light signals and for new technology Communications-Based Train Control (CBTC) signals.
  3. Persons on Trackbed/Police/Medical - Police and/or medical activity due to sick customers, vandalism, assault, persons struck by trains, unauthorized persons on tracks, and suspicious packages.
  4. Stations & Structures - Obstructions and other structural problems, such as damage to tunnels or debris on the right-of-way; electrical problems such as defective wires, cables, and power systems that aren’t on trains, including traction power to run the trains.
  5. Car Equipment - Broken doors, seats, windows, lights, brakes, and other problems caused by defective trains, such as power or air conditioning failures.
  6. Other - Inclement weather, water conditions, external power supply failures, as well as drawbridge openings and other external conditions, such as unstable nearby buildings, nearby fires, civil demonstrations, and/or parades.

Filtering by Peak or Off-Peak shows incidents that began during the selected time period (regardless of how long the incident lasted).

Service Delivered (sometimes referred to as throughput) measures our ability to deliver the scheduled service. Service Delivered is measured along the busiest part of the line, which reflects service across the entire line, and is reported as the percentage of scheduled trains that are provided during peak hours. Peak hours are as noted above.

Additional Platform Time (APT) is the estimated average extra time that customers spend waiting on the platform for a train, compared with their scheduled wait time. Similarly, Additional Train Time (ATT) is the estimated average extra time that customers spend onboard a train, compared to the time they would have spent onboard if trains were running according to schedule. Journey Time is the average time it takes a customer to complete a trip, from the time they start waiting at the origin platform to the time they alight from a train at their destination. Additional Journey Time (AJT) is the average extra time that a customer’s trip takes, compared to their scheduled trip time; it is equivalent to the sum of APT and ATT, and also to Journey Time minus the average scheduled trip time. Please note that the Journey Time and Additional Journey Time metrics do not appear in the charts on the dashboard; they appear only in the state-mandated metrics Excel file.

All estimates are for each individual train a customer uses in their journey, not all trains in their journey combined (in transit terminology, each “unlinked trip”). Put another way, each “leg” of a subway journey involving transfers is counted separately in calculating these metrics. This allows these metrics to be calculated for each individual line, as well as for the whole system. These metrics are reported for trips starting from 6 a.m. to 11 p.m. on weekdays.
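
As a rough illustration of how these quantities relate for a single unlinked trip (the numbers below are made up for the example, not dashboard data):

```python
# Hypothetical values for one unlinked trip, in minutes (illustrative only).
scheduled_wait, actual_wait = 4.0, 6.5      # platform time
scheduled_ride, actual_ride = 12.0, 13.0    # on-train time

apt = actual_wait - scheduled_wait          # Additional Platform Time = 2.5
att = actual_ride - scheduled_ride          # Additional Train Time    = 1.0
journey_time = actual_wait + actual_ride    # Journey Time             = 19.5
scheduled_trip_time = scheduled_wait + scheduled_ride
ajt = apt + att                             # Additional Journey Time  = 3.5
assert abs(ajt - (journey_time - scheduled_trip_time)) < 1e-9  # same quantity, two ways
```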

APT, ATT, AJT, and Journey Time are calculated using a process that takes as input ridership data from MetroCard swipes and OMNY taps, train schedules, and actual train movement data, and then estimates the actual and scheduled journey time for every weekday journey “leg”, or unlinked trip, on the subway system. This process has four main parts:

  1. Generate a sample of millions of unlinked trips, which represent the ridership patterns for a typical recent weekday. Historically, this sample has typically been updated monthly, but due to dramatic ridership changes during the COVID-19 pandemic, updates have been made more frequently. This process takes as input MetroCard/OMNY data for a representative weekday, and the typical subway service patterns for that weekday. This part begins with inferring customers’ journey destinations from patterns in MetroCard/OMNY data. Next, their journeys are split into unlinked trips based on what should be the best route given their origin and destination. This part of the process is very computationally intensive. For this reason, it is only feasible to run it for a small number of sample days, rather than every weekday.
  2. Every weekday, compile the schedule data (which includes service changes due to planned work) and actual train movement data for that day, into a pair of timetables. The actual train movement data comes from several data sources, including track circuit data and the systems that power the subway countdown clocks and real-time app information.
  3. Every weekday, assign the sample unlinked trips to a scheduled train and actual train. In this assignment, customers “choose” whatever train they expect will complete the trip the fastest on a given day, regardless of line/service and local/express. In making this choice, they are restricted to the particular “corridor” (e.g., 1/2/3 in Manhattan, A/C in Brooklyn, B/D in the Bronx) they were assigned to in Part 1. For example, if an unlinked trip is from Union Square to Grand Central, they can take the 4, 5, or 6 trains, but not the NQRW to Times Square and then the 7. Restricting the choices this way makes it computationally feasible to run daily.
  4. For every unlinked trip, calculate the scheduled and actual waiting time and on-train time based on the assigned trains. From this information, calculate the APT, ATT, AJT, and Journey Time for each unlinked trip, and aggregate to produce line-level metrics for each month.
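
A minimal sketch of the final step, assuming each unlinked trip already carries the scheduled and actual trains assigned in part 3 (the field names below are hypothetical, not the production schema):

```python
from dataclasses import dataclass

@dataclass
class UnlinkedTrip:
    # Hypothetical fields; the production process works from richer schedule and train-movement records.
    arrive_platform: float  # when the customer starts waiting at the origin platform (minutes after midnight)
    sched_board: float      # departure of the assigned scheduled train from the origin
    sched_alight: float     # arrival of the assigned scheduled train at the destination
    actual_board: float     # departure of the assigned actual train from the origin
    actual_alight: float    # arrival of the assigned actual train at the destination

def trip_metrics(t: UnlinkedTrip) -> dict:
    """Per-trip APT, ATT, AJT, and Journey Time, in minutes."""
    sched_wait = t.sched_board - t.arrive_platform
    actual_wait = t.actual_board - t.arrive_platform
    sched_ride = t.sched_alight - t.sched_board
    actual_ride = t.actual_alight - t.actual_board
    apt = actual_wait - sched_wait
    att = actual_ride - sched_ride
    return {"APT": apt, "ATT": att, "AJT": apt + att, "JourneyTime": actual_wait + actual_ride}

def line_averages(trips: list[UnlinkedTrip]) -> dict:
    """Aggregate trip-level results into simple averages for a line and month."""
    per_trip = [trip_metrics(t) for t in trips]
    return {key: sum(m[key] for m in per_trip) / len(per_trip) for key in per_trip[0]}
```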

This process relies on several key assumptions, to overcome data availability and computational feasibility issues:

  • A passenger’s next MetroCard swipe (or OMNY tap) location generally predicts their destination
  • Weekday (non-holiday) ridership, i.e. the set of journeys passengers want to take, does not vary substantially day to day within a month (or during shorter periods when the sample trips are updated more frequently)
  • Customers use the same “corridors” for a given journey day to day, choosing a consistent “path” based on what they consider typical service (though they will change between trains on that corridor, e.g., 4/5/6)

These simplifying assumptions are, of course, not strictly correct: a person’s next swipe doesn’t always predict their destination, subway ridership does vary somewhat day to day, and people do sometimes vary their path for the same journey day to day. However, they serve as valuable approximations of weekday passenger behavior which make the process’s estimates of these metrics close to their “actual” values, at an aggregate level. These assumptions do not hold as well during late nights and weekends, largely due to frequent service changes caused by track work, which is primarily why these metrics are reported only for weekdays and for trips starting between 6 a.m. and 11 p.m. In the era of the COVID-19 pandemic, several of these assumptions are clearly less valid than when the metrics were launched. This, along with other data issues, prompted some days to be excluded from reporting, as well as changes to the sample trip generation process to use more up-to-date sample days.

At the same time, the rollout of OMNY has improved data quality. While MetroCard data only indicates the 6-minute window in which a swipe took place, OMNY provides a precise timestamp. This requires fewer assumptions, as there is no need to distribute swipes across the 6-minute periods used by MetroCards.
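
For example, an entry-time estimate under the two fare systems could look something like the sketch below (a simplified illustration; the record fields are hypothetical and the actual process is more involved):

```python
import random

def estimated_entry_time(record: dict) -> float:
    """Estimated fare-entry time, in minutes after midnight (illustrative only).

    OMNY taps carry a precise timestamp, so they are used directly. MetroCard
    swipes are only known to fall within a 6-minute reporting window, so one
    simple approach is to spread them uniformly across that window.
    """
    if record["source"] == "OMNY":
        return record["tap_time"]
    # MetroCard: "window_start" marks the beginning of the 6-minute period.
    return record["window_start"] + random.uniform(0, 6)
```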

As data availability changes (e.g., rollout of new data sources for train locations, continued rollout of OMNY), updates will be made to the way these metrics are generated. Similarly, the methodology may be refined. If warranted, this may prompt revision of historical data to provide a comparable time series. Changes will also likely impact related metrics such as Customer Journey Time Performance, Journey Time, and Additional Journey Time.

Customer Journey Time Performance (CJTP) estimates the percentage of customer trips with a total travel time within 5 minutes of the scheduled time. It is equivalent to the percentage of customer trips with Additional Platform Time (APT) + Additional Train Time (ATT) less than 5 minutes. It is calculated using the same process as is used to calculate APT and ATT. See the description of those metrics for how this process works, as well as relevant assumptions. Like APT and ATT, CJTP is estimated for each individual train a customer uses in their journey, not all trains in their journey combined; in transit terminology, the “unlinked trip” level. It is reported for weekdays for trips starting from 6 a.m. to 11 p.m.
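
In terms of trip-level quantities, CJTP is simply the share of unlinked trips whose APT plus ATT is under 5 minutes; a minimal sketch, assuming per-trip APT and ATT values (in minutes) are already available:

```python
def cjtp(trips: list[dict]) -> float:
    """Percentage of unlinked trips with APT + ATT under 5 minutes.

    Each trip dict is assumed to hold "APT" and "ATT" in minutes (illustrative only).
    """
    on_time = sum(1 for t in trips if t["APT"] + t["ATT"] < 5)
    return 100.0 * on_time / len(trips)
```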

Mean Distance Between Failures (MDBF) reports how frequently car-related problems such as door failures, loss of motor power, or brake issues cause a delay of over five minutes. It is calculated by dividing the number of miles train cars run in service by the number of incidents due to car-related problems.

MDBF numbers include weekdays and weekends. Due to month-to-month statistical variation, this number is reported as a rolling 12-month average.
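
The calculation itself is a simple ratio; a minimal sketch of the rolling 12-month version, assuming monthly totals of car-miles and qualifying incidents are already available (hypothetical inputs):

```python
def rolling_mdbf(monthly_miles: list[float], monthly_incidents: list[int]) -> list[float]:
    """12-month rolling MDBF: miles run over the window divided by incidents in the window.

    Inputs are parallel lists of monthly car-miles and qualifying car-equipment
    incident counts, oldest month first (illustrative only).
    """
    results = []
    for i in range(11, len(monthly_miles)):
        miles = sum(monthly_miles[i - 11 : i + 1])
        incidents = sum(monthly_incidents[i - 11 : i + 1])
        results.append(miles / incidents if incidents else float("inf"))
    return results
```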

Car Class Lines
R62
R62A
R142
R142A
R188
R32
R42
R46
R68
R68A
R143
R160
R179

Subway Car Passenger Environment Survey Key Performance Indicators (PES-KPIs) group the results of surveys of subway car condition into three categories:

  • Appearance – Do the trains appear clean? Are they free of graffiti?
  • Equipment – Do the door panels, lighting, heat and air-conditioning work?
  • Information – Is the information helpful and appropriate? Are there maps, proper signage? Are the conductor’s announcements clear?

Due to statistical variations in the monthly surveys, this number is reported as a rolling 12-month average.

Station Passenger Environment Survey Key Performance Indicators (PES-KPIs) group the results of surveys of station condition into three categories:

  • Appearance – Is the station clean and free of graffiti?
  • Equipment – Are MetroCard vending machines, turnstiles and station attendant booths in working order?
  • Information – What service information is available to our customers to help ease their commute? Are there maps easily visible and in good condition? Are Transit employees available, in proper uniform and able to provide customer assistance? Is the signage clear and up-to-date?

Due to statistical variations in the monthly surveys, this number is reported as a rolling 12-month average.

Elevator and Escalator availability is defined as the amount of time that elevators or escalators are operational systemwide. Availability for a given time period is measured by determining the percentage of that time period a unit is available for customer use. All service outages, regardless of cause, count as downtime in availability calculations, except for units out of service for capital rehabilitation (which are excluded).
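
As a rough sketch of the availability arithmetic for a single unit over a reporting period (the outage classification and field names here are hypothetical):

```python
def unit_availability(period_hours: float, outages: list[dict]) -> float:
    """Percentage of the period a unit was available for customer use.

    Each outage dict has "hours" and "cause"; only outages attributed to
    capital rehabilitation are excluded from downtime (illustrative only).
    """
    downtime = sum(o["hours"] for o in outages if o["cause"] != "capital_rehab")
    return 100.0 * (period_hours - downtime) / period_hours
```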

Most elevators and escalators in the subway are maintained by NYCT and are electronically monitored 24 hours a day. Some elevators and escalators in the subway are owned and maintained by third parties; these are inspected by NYCT personnel every 8 hours. The dashboard numbers include weekdays and weekends.

Wait Assessment (WA) measures how regularly the trains are spaced during peak hours at selected timepoints on each line. To meet the standard, the headway (time between trains) can be no greater than 25% more than the scheduled headway. This provides a percentage of trains passing the standard, but does not account for extra service operated, is not weighted to how many customers are waiting for the trains at different stations, does not distinguish between relatively minor gaps in service and major delays, and is not a true measurement of time customers spend waiting on the platform.
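
A minimal sketch of the headway check behind Wait Assessment (the actual measurement is based on observations at selected timepoints):

```python
def headway_meets_standard(actual_headway: float, scheduled_headway: float) -> bool:
    """A headway passes if it is no more than 25% longer than scheduled."""
    return actual_headway <= 1.25 * scheduled_headway

def wait_assessment(headway_pairs: list[tuple[float, float]]) -> float:
    """Percentage of (actual, scheduled) headway pairs meeting the standard."""
    passing = sum(1 for actual, sched in headway_pairs if headway_meets_standard(actual, sched))
    return 100.0 * passing / len(headway_pairs)
```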

Terminal On-Time Performance (TOTP) measures the percentage of trains arriving at their destination terminals as scheduled. A train is counted as on-time if it arrives at its destination early, on time, or no more than five minutes late, and has not skipped any planned stops. TOTP is a legacy metric that provides a measure of trains arriving within the standard, and not a direct measure of customer travel time, particularly since relatively few customers travel all the way to the end of a line.

The new measures contained in this dashboard – APT, ATT, and CJTP – have been made possible systemwide by the MTA's investments in new train tracking technology and more robust methods for determining how customers use the subway system. For the B Division, this technology became sufficient in March 2017. As a result, systemwide data prior to March 2017 only includes data for the A Division lines.

The W service began in November 2016. As of April 2018, TOTP for the W is reported with the N line.

Delays are calculated based on actual service as compared with schedules loaded into NYC Transit’s electronic systems. The rapid implementation of the Essential Service Plan in response to the COVID-19 outbreak meant that those electronic systems did not properly reflect the service or schedules that were operated between March 22, 2020 and April 13, 2020. To maintain comparisons with historical data, the totals for the first 15 weekdays in March were scaled up, using their daily average, to a projected total for the full 22 weekdays. In April, the daily average from the last 13 weekdays was similarly scaled up to a projected total for 22 weekdays.
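
The scaling described here amounts to projecting the observed daily average over a full month of weekdays; for example, with a made-up delay total:

```python
# Hypothetical numbers, shown only to illustrate the scaling.
observed_total = 15_000      # delays recorded over the 15 usable weekdays in March
usable_weekdays = 15
weekdays_in_month = 22

projected_total = (observed_total / usable_weekdays) * weekdays_in_month  # 22,000
```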

Starting in January 2021, NYCT implemented data processing changes to improve the accuracy of the APT, ATT and CJTP metrics. To maintain the ability to compare past performance with current performance, we have rerun the full history of these metrics with the improved process; therefore, current performance results may differ from previously reported values. The specifics of these improvements, and their effects on reported performance, are described in more detail in this document.

The dashboard indicators, except for MDBF, reflect weekday service only. MDBF is measured across all days of the week.

When no time periods are selected, data is shown for the whole day (24 hours). Peak hours are from 7 am to 9 am in the morning and 4 pm to 7 pm in the evening. Off-peak hours are the times of day outside of those hours. Exceptions are noted below.

Each route is assigned to a single borough based on the letters used for the route number. For example, data for the M60-SBS is included in the total when Manhattan is selected, but not if only Queens is selected. Similarly, the Q54 is included in Queens, and the S79-SBS is included in Staten Island. Express routes are counted in their outer borough, so the BXM7 is a Bronx route and the X27 is Brooklyn.

Service Delivered (sometimes referred to as throughput) measures our ability to deliver the scheduled service. It is calculated as the percentage of scheduled bus trips that are actually provided during peak hours. Service Delivered is measured at the peak load point, which is the stop on the route where the bus is most crowded, using GPS tracking data from Bus Time, as well as bus depot operations records.

Bus speeds measure how quickly buses travel along their routes. Speeds are calculated as the average end-to-end speed along a route using Bus Time data.
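
A minimal sketch of the end-to-end speed calculation for one trip, assuming the route length and terminal departure and arrival times are known from Bus Time data (hypothetical inputs):

```python
def end_to_end_speed_mph(route_miles: float, depart_minute: float, arrive_minute: float) -> float:
    """Average end-to-end speed for one trip, in miles per hour (times in minutes after midnight)."""
    elapsed_hours = (arrive_minute - depart_minute) / 60.0
    return route_miles / elapsed_hours

# Example: an 8.4-mile route covered in 55 minutes averages roughly 9.2 mph.
print(round(end_to_end_speed_mph(8.4, 600, 655), 1))
```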

Additional Bus Stop Time (ABST) is the average added time that customers wait at a stop for a bus, compared with their scheduled wait time. The measure assumes customers arrive at the bus stop uniformly, except for routes with longer headways, where customers arrive more closely aligned to the schedule. ABST (sometimes referred to as Excess Wait Time) is a new indicator for the MTA, but is considered an industry best practice worldwide. ABST is measured using customers’ MetroCard swipes on buses combined with GPS tracking data from Bus Time. This indicator is likely to be refined and enhanced over time as the MTA gains experience integrating the latest technology. ABST is measured from 4 am to 11 pm.
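
Under the uniform-arrival assumption, a customer's expected wait depends on the distribution of gaps between buses, so a rough sketch of the excess-wait idea could look like this (illustrative only; the production method combines MetroCard swipes with Bus Time data):

```python
def expected_wait(headways: list[float]) -> float:
    """Average wait, in minutes, assuming customers arrive uniformly in time.

    More riders accumulate during long gaps, so the expected wait is
    sum(h^2) / (2 * sum(h)) rather than a simple mean(h) / 2.
    """
    return sum(h * h for h in headways) / (2 * sum(headways))

def additional_bus_stop_time(actual_headways: list[float], scheduled_headways: list[float]) -> float:
    """ABST sketch: actual expected wait minus scheduled expected wait, in minutes."""
    return expected_wait(actual_headways) - expected_wait(scheduled_headways)
```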

Additional Travel Time (ATT) is the average additional time customers spend onboard the bus compared to the schedule. ATT (sometimes referred to as Excess In-Vehicle Travel Time) is a new indicator for the MTA, but is considered an industry best practice worldwide. ATT is measured using customers’ MetroCard swipes on buses combined with GPS tracking data from Bus Time. This indicator is likely to be refined and enhanced over time as the MTA gains experience integrating the latest technology. ATT is measured from 4 am to 11 pm.

Customer Journey Time Performance (CJTP) measures the percentage of customers who complete their journey (ABST + ATT) within 5 minutes of the scheduled time. This is a new indicator for the MTA, but is used by other transit agencies to measure service. CJTP is measured using customers’ MetroCard swipes on buses combined with GPS tracking data from Bus Time. This indicator is likely to be refined and enhanced over time as the MTA gains experience integrating the latest technology. CJTP is measured from 4 am to 11 pm.

Mean Distance Between Failures (MDBF) reports how frequently mechanical problems such as engine failures or electrical malfunctions cause delays. It is calculated by dividing the number of miles buses run in service by the number of incidents due to mechanical problems.

MDBF numbers include weekdays and weekends. This number is reported as a 12-month average.

Passenger Environment Survey (PES) indicators combine the results of surveys covering a number of different aspects of bus vehicle and operating conditions into three categories:

  • Appearance: For example, do the buses appear clean? Are they free of graffiti?
  • Equipment: For example, do the heat, air conditioning, and wheelchair lift work?
  • Information: For example, is the information helpful and appropriate? Are the electronic signs correct? Are the announcements clear?

Separate surveys are conducted for local and express buses. Express buses are only surveyed for appearance and equipment indicators.

Surveys are conducted between 4 am and 11 pm on weekdays. This number is reported as a 12-month average.

Wait Assessment (WA) measures how evenly buses are spaced. It is defined as the percentage of actual intervals between buses that are no more than three minutes over the scheduled interval for the morning (7 am to 9 am) and afternoon (4 pm to 7 pm) peak periods and no more than five minutes over the scheduled interval for the rest of the day. This measure provides a percentage of buses passing the standard, but it does not account for extra service operated, it is not weighted based on how many customers are waiting for buses at different stops, it does not distinguish between relatively minor gaps in service and major delays, and it is not a true measurement of time customers spend waiting at stops.
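
A minimal sketch of the interval check, with the allowed margin depending on whether the interval falls in a peak period (times in minutes after midnight; illustrative only):

```python
def bus_interval_meets_standard(actual_gap: float, scheduled_gap: float, start_minute: float) -> bool:
    """An interval passes if it exceeds the scheduled interval by no more than the
    allowed margin: 3 minutes during the 7-9 am and 4-7 pm peaks, 5 minutes otherwise."""
    in_am_peak = 7 * 60 <= start_minute < 9 * 60
    in_pm_peak = 16 * 60 <= start_minute < 19 * 60
    margin = 3 if (in_am_peak or in_pm_peak) else 5
    return actual_gap - scheduled_gap <= margin
```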

The customer indicators contained in this dashboard (Additional Bus Stop Time, Additional Travel Time, and Customer Journey Time Performance) are new indicators for the MTA. Accordingly, data is not yet available prior to August 2017.

Select Bus Service equipment is sometimes used on Local/Limited routes. Prior to June 2017, data for Select Bus Service is combined with Local/Limited service. Due to better tracking mechanisms, MDBF data has been captured separately since June 2017.

Due to the survey methodology for the Passenger Environment Surveys, data is collected for SBS routes and reported under the Local/Limited service type.

This indicator reflects the percentage of time during each month that platforms at accessible stations are available, on average, to passengers who need step-free access. It is measured at the individual platform level for each station, based on whether the platform can be reached by one or more working elevator(s) or by ramps. If multiple elevators are required to travel from the street to the platform, that platform is only considered available when all elevators on the accessible pathway are in service, or if an alternate pathway to the platform is available. Elevator downtime for an unplanned outage or planned routine maintenance is included in this indicator, while planned capital improvements are excluded. This indicator includes both MTA-owned and third-party elevators. Planned capital improvements may not be excluded from this indicator for third-party elevators.
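
A minimal sketch of the pathway logic for a single platform, assuming each accessible pathway is represented as the list of elevators it requires (an empty list standing in for a ramp); the representation is hypothetical:

```python
def platform_available(pathways: list[list[str]], out_of_service: set[str]) -> bool:
    """A platform counts as available if at least one accessible pathway is fully usable.

    Each pathway lists the elevator IDs that must all be in service to use it;
    an empty list represents a ramp, which needs no elevators (illustrative only).
    """
    return any(all(unit not in out_of_service for unit in path) for path in pathways)

# Example: a platform reachable either via two elevators in series or via a ramp.
print(platform_available([["EL101", "EL102"], []], out_of_service={"EL102"}))  # True, thanks to the ramp
```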

This is a new indicator for the MTA, but was informed by researching best practices at other transit agencies worldwide. This indicator is likely to be refined and enhanced over time as the MTA makes improvements to our tools and technology.

This indicator shows the number of subway journeys made each month by passengers using Reduced-Fare or Access-A-Ride MetroCards, categorized by the type of MetroCard used:

  • Reduced-Fare MetroCard for Seniors: Customers who have a Reduced-Fare MetroCard based on Senior Citizen status (ages 65 and up)
  • Reduced-Fare MetroCard for Customers with Disabilities: Customers who have a Reduced-Fare MetroCard based on a qualifying disability
  • AutoGate: Customers who have a Reduced-Fare MetroCard (senior or disabled) that allows them to use the AutoGate, which opens the automatic entry/exit gates located at select subway stations.
  • Access-A-Ride: Customers who are eligible for the Paratransit Access-A-Ride program

To learn more about this program and to apply, visit the Reduced-Fare MetroCard page.

This indicator shows the number of bus journeys made each month by passengers using Reduced-Fare or Access-A-Ride MetroCards, categorized by the type of MetroCard used:

  • Reduced-Fare MetroCard for Seniors: Customers who have a Reduced-Fare MetroCard based on Senior Citizen status (ages 65 and up)
  • Reduced-Fare MetroCard for Customers with Disabilities: Customers who have a Reduced-Fare MetroCard based on a qualifying disability
  • AutoGate: Customers who have a Reduced-Fare MetroCard (senior or disabled) that allows them to use the AutoGate, which opens the automatic entry/exit gates located at select subway stations.
  • Access-A-Ride: Customers who are eligible for the Paratransit Access-A-Ride program

To learn more about this program and to apply, visit the Reduced-Fare MetroCard page.

This indicator shows the number of times the bus operator deploys the wheelchair ramp or lift to assist a passenger to board or exit a bus. This data is manually recorded by bus operators. Any passenger who requests it may use a bus ramp or lift. This indicator is likely to be refined and enhanced over time as automatically collected data sources become more widely available.

This indicator includes all elevators and ramps that are part of an accessible pathway to subway service, including elevators that are managed and maintained by a party other than the MTA. These third parties are primarily real estate developers, but also include other public agencies. This indicator relies on manual observations by MTA staff and outage reports that third-party owners provide to the MTA. Outage reports from third parties may not exclude planned capital improvements.

There are a small number of journeys taken on non-standard bus routes such as shuttles that replace subway service during planned work. These journeys are counted toward the systemwide totals, but are not assigned to any specific borough as they do not have an official route designation. Selecting all boroughs shows the total for the five boroughs excluding these journeys, while the default view (no boroughs selected) shows the systemwide total including these journeys. These journeys are labeled with a borough of "Special Routes" in the export file.

Each route is assigned to a single borough based on the letters used for the route number. For example, data for the M60-SBS is included in the total when Manhattan is selected, but not if only Queens is selected. Similarly, the Q54 is included in Queens, and the S79-SBS is included in Staten Island. Express routes are counted in their outer borough, so the BXM7 is a Bronx route and the X27 is Brooklyn. In the case of the Bus Wheelchair Ramp/Lift Usage, data is collected at the route level only and not at the individual stop. All deployments are assigned to a borough based on the route number, and not the borough where the ramp or lift was actually deployed.

Bus Wheelchair Ramp/Lift Usage data is typically available only after a two-month lag. This lag is necessary to ensure all data collected is entered and validated for accuracy and completeness. The lag may be longer under special circumstances.

The dashboard indicators are measured across all days of the week.

By default, data is shown for the whole day (24 hours).

Reduced fares are available for MTA subway, bus, and rail customers who are 65 or older or who have qualifying disabilities. To learn more about this program and to apply, visit the Reduced-Fare MetroCard page.