Introducing Faasly

The real-time
data platform for
observability

Why did we build Faasly?

Most monitoring tools do too much

They install an agent or poll data through an API, send it to their time-series database, and then hand you dashboards and charts to look at. This seems fine at first, but it causes far more pain than it cures once you scale to terabytes per day!

  • Data visibility

    You end up deciding what to log and what to drop, then resorting to compression or sampling to keep costs low.
  • Latency and throughput

    Centralized logging has done more harm than good: scanning the full dataset for results takes forever.
  • Pricing model

    Pricing plans are stuck in the '90s. Vendors have exploited companies by charging for every GB or row ingested and retained.

How's Faasly different?

Our approach is simple: ingest OpenTelemetry data into an S3 bucket as Parquet files in the Iceberg table format, then query it with DuckDB at millisecond retrieval latency and zero egress cost. A minimal sketch follows the list below.

  • 01. Run fast queries, faster ― State-of-the-art DuckDB OLAP SQL query engine on cold S3 data.
  • 02. Streaming data done right ― Kafka-compatible data streaming built directly on top of S3.
  • 03. Bring your team to the same table ― Apache Iceberg brings an open table format built for big data.
  • 04. Fetch query results faster ― Parquet enables targeted scans that require less I/O and avoid false positives.
  • 05. Think you know scale? Think S3 scale ― 100M reqs/second. 400 Terabits/second. 280T objects.
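
Here's a minimal sketch of that pipeline from the query side, using DuckDB's Python API with the httpfs extension to scan Parquet directly on S3. The bucket path, region, and column names below are hypothetical placeholders, not our production schema:

    import duckdb

    con = duckdb.connect()
    # httpfs gives DuckDB native S3 support; both extensions ship with DuckDB.
    con.execute("INSTALL httpfs; LOAD httpfs;")
    con.execute("SET s3_region = 'us-east-1';")  # assumed region

    # Predicate pushdown and column pruning mean DuckDB reads only the
    # Parquet row groups this query actually touches.
    rows = con.execute("""
        SELECT service_name, count(*) AS error_count
        FROM read_parquet('s3://faasly-otel/logs/*.parquet')
        WHERE severity = 'ERROR'
          AND "timestamp" > now() - INTERVAL 1 HOUR
        GROUP BY service_name
        ORDER BY error_count DESC
    """).fetchall()
    print(rows)

No cluster to stand up and no data to move: the query engine comes to the data.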
Fork in the road

Let's talk about why we went with certain architectural choices

"This will never work". We've heard this a lot of times. We got a ton of advice to build this on top of ClickHouse. In fact, almost all modern day startups in this space use ClickHouse. A couple of issues that were raised in our choice of tech stack:

  • 01. DuckDB is not built for large datasets ― 95% of queries on metrics data cover the last hour, and 98% cover the last 4 hours. Iceberg's time-travel queries, combined with DuckDB's parallel query processing and Arrow-based in-memory representation, make it a great fit for larger-than-memory datasets (see the sketch after this list).
  • 02. DuckDB is not designed to run on multiple machines ― Two ideas: scale up and single tenancy. We love running our application safely on a single server and scaling up as your data grows. We believe in scaling up rather than scaling out. As our friends at MotherDuck ask, why did we scale out in the first place?
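
To make point 01 concrete, here is a hedged sketch of that hot-path pattern over an Iceberg table, using DuckDB's iceberg extension. The table location and schema are assumptions for illustration:

    import duckdb

    con = duckdb.connect()
    con.execute("INSTALL httpfs; LOAD httpfs;")
    con.execute("INSTALL iceberg; LOAD iceberg;")

    # Every Iceberg snapshot is a point-in-time version of the table
    # that time-travel queries can read from.
    snapshots = con.execute(
        "SELECT * FROM iceberg_snapshots('s3://faasly-otel/metrics')"
    ).fetchall()

    # The hot path: most metric queries touch only the last hour, so
    # DuckDB scans a handful of recent files rather than the whole lake.
    recent = con.execute("""
        SELECT metric_name, avg(value) AS avg_value
        FROM iceberg_scan('s3://faasly-otel/metrics')
        WHERE "timestamp" > now() - INTERVAL 1 HOUR
        GROUP BY metric_name
    """).fetchall()

Because the working set is small and recent, a single scaled-up machine handles it comfortably, which is exactly the bet behind point 02.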
Behind the screens

Scalable, fault-tolerant, and efficient data lakehouse

Our approach is opinionated, offering tools and systems for monitoring and security. This means you don't have to build yet another observability pipeline, provision a Kafka cluster, or manage Spark jobs. We take care of wiring everything together for you.
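
On the producer side, any stock OpenTelemetry SDK should work unchanged. Here is a minimal Python sketch, assuming Faasly exposes a standard OTLP endpoint (the address below is hypothetical):

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

    # Point the exporter at the ingestion endpoint (hypothetical address).
    # Faasly lands the data in S3 as Iceberg-managed Parquet from there.
    provider = TracerProvider()
    exporter = OTLPSpanExporter(endpoint="ingest.faasly.example:4317", insecure=True)
    provider.add_span_processor(BatchSpanProcessor(exporter))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("checkout-service")
    with tracer.start_as_current_span("charge-card"):
        pass  # your application work goes here

No agents to install, no Kafka cluster to provision, no Spark jobs to babysit.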

Public Roadmap

Join our community of early adopters and advocates and help us prioritize

  • 238K

    Lines of code
  • 218

    Active Linear issues
  • 88

    Production deployments

Ready to roll with us?

The real-time data platform
for engineering teams

Our mission is to make your work easier.
We make sure you never have to slog through logs again.
