
Iterable Status
Real-time updates of Iterable issues and outages
Iterable status is Operational
Iterable Global Proof Sends
Email Sends
Journey Processing
Push Sends
SMS Sends
User Updates
List Updates
Active Incidents
No active incidents
Recently Resolved Incidents
We are currently investigating an issue with one of our Elasticsearch (ES) clusters, c102. The cluster is in a red state, leading to very slow ingestion and widespread API 500 errors.
Customers are experiencing failed searches and errors on multiple API endpoints. Ingestion rates are near zero, with lag of up to 20 minutes and increasing across the internal, real-time, and bulk data pipelines.
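While an incident like this is returning intermittent 500s, client code can often ride it out with retries and exponential backoff instead of failing outright. A minimal sketch in Python: the base URL and Api-Key header follow Iterable's documented REST conventions, but the endpoint path and payload in the usage example are illustrative placeholders, not a statement of which endpoints are affected.

```python
import os
import random
import time

import requests

API_BASE = "https://api.iterable.com/api"  # Iterable REST base; verify for your account
API_KEY = os.environ["ITERABLE_API_KEY"]


def post_with_backoff(path, payload, max_attempts=5):
    """POST to the Iterable API, retrying 5xx responses with exponential backoff.

    During a partial outage (e.g., an ES cluster in a red state), requests may
    intermittently fail with 500s; backing off avoids piling load onto a
    struggling cluster while still letting transient failures succeed.
    """
    for attempt in range(max_attempts):
        resp = requests.post(
            f"{API_BASE}{path}",
            json=payload,
            headers={"Api-Key": API_KEY},
            timeout=10,
        )
        if resp.status_code < 500:
            return resp  # success, or a client error we should not retry
        if attempt < max_attempts - 1:
            # Exponential backoff with jitter: ~1s, ~2s, ~4s, ...
            time.sleep((2 ** attempt) + random.random())
    resp.raise_for_status()  # out of attempts; surface the last 5xx


# Illustrative usage (payload fields are placeholders):
# post_with_backoff("/events/track", {"email": "user@example.com", "eventName": "test"})
```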
We have been alerted to an issue where proof sends are not being sent as expected as of 9:00 am PT. Iterable engineers are actively investigating the issue. Next update by 11:00 am PT or sooner.
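Between update windows like the one above, it can help to watch the status page programmatically rather than refreshing it by hand. A minimal polling sketch; the JSON endpoint below is a hypothetical example of the Statuspage-style status.json convention, not a documented Iterable URL, so confirm what this status page actually exposes before relying on it.

```python
import time

import requests

# Hypothetical endpoint: many hosted status pages expose a JSON summary like
# this, but the exact URL for status.iterable.com must be confirmed.
STATUS_URL = "https://status.iterable.com/api/v2/status.json"


def watch_status(interval_s=60):
    """Poll the status page and print whenever the overall indicator changes."""
    last = None
    while True:
        try:
            data = requests.get(STATUS_URL, timeout=10).json()
            indicator = data.get("status", {}).get("indicator")  # e.g. "none", "minor", "major"
        except requests.RequestException as exc:
            indicator = f"unreachable ({exc})"
        if indicator != last:
            print(f"Iterable status changed: {last!r} -> {indicator!r}")
            last = indicator
        time.sleep(interval_s)


# watch_status()  # runs until interrupted
```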
Iterable Components
Iterable Global Web Application
Iterable Global API Success
Iterable Global Links
Iterable Global System Webhooks
Iterable Global Partner Webhooks
Iterable Global Analytics Processing
Iterable Global Catalog
Iterable Global API Ingestion
Iterable Global Campaign Sends
Iterable Global Proof Sends
Affected by the proof sends incident described under Recently Resolved Incidents above.
Iterable Global Journey Processing
Iterable Cluster 5
Email Sends
Journey Processing
Push Sends
SMS Sends
User Updates
List Uploads
User Deletions
Iterable Cluster 6
Email Sends
Journey Processing
Push Sends
SMS Sends
User Updates
List Uploads
User Deletions
Iterable Cluster 8
Email Sends
Journey Processing
Push Sends
SMS Sends
User Updates
List Uploads
User Deletions
Iterable Cluster 9
Email Sends
Journey Processing
Push Sends
SMS Sends
User Updates
List Uploads
User Deletions
Iterable Cluster 10
Email Sends
Journey Processing
Push Sends
SMS Sends
User Updates
List Uploads
User Deletions
Iterable Cluster 11
Email Sends
Journey Processing
Push Sends
SMS Sends
User Updates
List Uploads
User Deletions
Iterable Cluster 12
Email Sends
Journey Processing
Push Sends
SMS Sends
User Updates
List Uploads
User Deletions
Iterable Cluster 13
Email Sends
Journey Processing
Push Sends
SMS Sends
User Updates
List Uploads
User Deletions
Iterable Cluster 14
Email Sends
Journey Processing
Push Sends
SMS Sends
User Updates
List Uploads
User Deletions
Iterable Cluster 15
Email Sends
Journey Processing
Push Sends
SMS Sends
User Updates
List Uploads
User Deletions
Iterable Cluster 16
Email Sends
Journey Processing
Push Sends
SMS Sends
User Updates
List Uploads
User Deletions
Iterable Cluster 17
Email Sends
Journey Processing
Push Sends
SMS Sends
User Updates
List Uploads
User Deletions
Iterable Cluster 18
Email Sends
Journey Processing
Push Sends
SMS Sends
User Updates
List Uploads
User Deletions
Iterable Cluster 19
Email Sends
Journey Processing
Push Sends
SMS Sends
User Updates
List Uploads
User Deletions
Iterable Cluster 20
Email Sends
Journey Processing
Push Sends
SMS Sends
User Updates
List Uploads
User Deletions
Iterable Cluster 21
Email Sends
Journey Processing
Push Sends
SMS Sends
User Updates
List Uploads
User Deletions
Iterable Cluster 22
Email Sends
Journey Processing
Push Sends
SMS Sends
User Updates
List Uploads
User Deletions
Iterable Cluster 23
Email Sends
Journey Processing
Push Sends
SMS Sends
User Updates
List Uploads
User Deletions
Iterable Cluster 24
Email Sends
Journey Processing
Push Sends
SMS Sends
User Updates
List Updates
User Deletions
Iterable Cluster 25
Email Sends
Journey Processing
Push Sends
SMS Sends
User Updates
List Updates
User Deletions
Iterable Cluster 100
Email Sends
Journey Processing
Push Sends
SMS Sends
User Updates
List Updates
User Deletions
Iterable Cluster 101
Email Sends
Journey Processing
Push Sends
SMS Sends
User Updates
List Updates
User Deletions
Iterable Cluster 102
All Cluster 102 components below are affected by the c102 ES incident described under Recently Resolved Incidents above (a cluster-health sketch follows this list).
Email Sends
Journey Processing
Push Sends
SMS Sends
User Updates
List Updates
User Deletions
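For context on what "red state" means: Elasticsearch reports cluster health as green, yellow, or red via its _cluster/health API, and red means at least one primary shard is unassigned, so some data is unavailable for search and indexing. Iterable's internal tooling is not public; this sketch shows how an operator of their own ES cluster would check for that condition, with the cluster URL as a placeholder.

```python
import requests

ES_URL = "http://localhost:9200"  # placeholder; point at your own cluster


def cluster_status():
    """Return the Elasticsearch cluster health status: 'green', 'yellow', or 'red'."""
    health = requests.get(f"{ES_URL}/_cluster/health", timeout=5).json()
    return health["status"]


if __name__ == "__main__":
    status = cluster_status()
    print(f"cluster health: {status}")
    if status == "red":
        # Matches the symptoms above: failed searches and near-zero ingestion.
        print("at least one primary shard is unassigned; expect failed reads/writes")
```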