New post from me on the complexity and importance of metric backfill in mobile observability. Backfill is typically an afterthought in traditional server-centric TSDBs, and it is anything but in the mobile world. Enjoy! (Blog link in the comments.)
How metric backfill boosts mobile observability
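The core of the problem, sketched loosely: mobile clients record metric points while offline (with the timestamp of the event) and flush them much later, so the backend must accept writes that land in the past. All names below are illustrative, not bitdrift's actual API.

```javascript
// Illustrative only: why mobile telemetry forces backfill. Points are
// recorded with the event's original timestamp, possibly while offline,
// and flushed later, so they arrive at the backend "in the past".
class OfflineMetricBuffer {
  constructor() {
    this.points = [];
  }
  // Record a point with its original timestamp.
  record(name, value, ts = Date.now()) {
    this.points.push({ name, value, ts });
  }
  // sendFn would write each point to the backend; timestamps are
  // preserved, which is exactly the out-of-order ("backfill") write
  // that a server-centric TSDB may reject or mishandle.
  flush(sendFn) {
    const batch = this.points;
    this.points = [];
    batch.forEach(sendFn);
    return batch.length;
  }
}
```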
More Relevant Posts
🌟 New Blog Just Published! 🌟 📌 LLMs Still Can't Replace SREs in Incident Response 🚀 ✍️ Author: Hiren Dave 📖 The modern observability stack (metrics, traces, logs) has become the nervous system of every cloud-native service. As engineers stare at a flood of telemetry, the human cost surfaces: on-call... 🕒 Published: 2025-09-29 📂 Category: Tech 🔗 Read more: https://lnkd.in/dGiBXc9J 🚀✨ #llms #sre #incidentresponse
I feel like MCPs are a bit like open APIs in the early 2000s. The Web 2.0 craze triggered many creative mash-ups of different APIs to build new and cool things. Things we take for granted today but were novel then became possible, like pulling real-time earthquake data and putting it on a map. MCPs feel like that, but putting the tools into the hands of an LLM makes them accessible to a whole new group of people, some of whom won't even realise they're using an MCP. Today Amplitude is launching our #AmplitudeMCP, which means your LLM tools like Claude or Cursor can have access to your product behaviour data. The exciting (and nerve-wracking) thing about this is we don't know today what our users will use it for, what they will build! I'm expecting amazing things. https://lnkd.in/gysyqVQt
🚀 The Promise Trap That Even Experienced JS Devs Miss
Here's a small snippet that looks innocent but hides a sneaky bug 👇 What do you think is the output in the given image?
Actual Output: Uncaught (in promise) Something went wrong!
💡 Why? Because Promise.reject() doesn't throw an error synchronously; it returns a rejected Promise. And try...catch only catches synchronous errors, not Promise rejections. So the rejection goes uncaught, and the .then() never runs.
✅ Fix: Make the function async and await the Promise:

async function init() {
  try {
    await processing();
  } catch (err) {
    console.log("Error in processing.");
  }
}
init().then(() => console.log("End"));

Output:
Error in processing.
End

Moral of the story: if you're using Promises, remember that try...catch only works when you await.
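For contrast, here is a runnable reconstruction of the behaviour the post describes; processing() is an assumed name, and the shape of the snippet in the image is a guess based on the explanation above.

```javascript
// Assumed reconstruction: processing() returns a rejected Promise
// rather than throwing an error synchronously.
function processing() {
  return Promise.reject("Something went wrong!");
}

// The try...catch never fires: nothing is thrown synchronously.
function demoSync() {
  let caught = false;
  try {
    // The rejected Promise escapes the try block untouched; the
    // .catch() here only keeps Node from reporting it as unhandled.
    processing().catch(() => {});
  } catch (err) {
    caught = true; // never runs
  }
  return caught;
}

// Awaiting rethrows the rejection where try...catch can see it.
async function demoAwait() {
  try {
    await processing();
    return "resolved";
  } catch (err) {
    return "caught: " + err;
  }
}
```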
Datadog has made it even easier to monitor all the complexities that come with LLMs. What does that mean? With Datadog's LLM-as-a-Judge, you can now create custom LLM-based evaluations in Datadog to measure qualitative performance, such as helpfulness, factuality, and tone, on your LLM Observability production traces.
> Define evaluation prompts to reflect what "good" means for your application.
> Use your own LLM API key to run evaluations with your preferred model provider.
> Automate evaluations across production traces in LLM Observability to continuously monitor model quality in real-world conditions.
By integrating evaluation directly into observability, you can quantify subjective quality, detect regressions faster, and improve your LLM apps with data-driven quality signals. Get around it! 🤖
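The underlying LLM-as-a-judge pattern is simple to sketch. Everything below is illustrative and is not Datadog's actual API: build an evaluation prompt per criterion, send it to your own model provider (that call is omitted here), and parse a numeric score out of the judge's reply.

```javascript
// Illustrative sketch of the generic LLM-as-a-judge pattern; none of
// these names come from Datadog. The actual judge call (sending the
// prompt to a model provider with your own API key) is left out.
function buildJudgePrompt(criterion, userInput, modelOutput) {
  return [
    `You are judging an LLM application's response for ${criterion}.`,
    `User input: ${userInput}`,
    `Model output: ${modelOutput}`,
    `Reply with one integer from 1 (poor) to 5 (excellent).`,
  ].join("\n");
}

// Parse the judge model's free-text reply into a score, or null
// if no score can be found.
function parseJudgeScore(reply) {
  const m = reply.match(/[1-5]/);
  return m ? Number(m[0]) : null;
}
```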
How effective can QUIC be for dynamic content delivery? In our latest study, Daniel Sedlak dives deep into congestion control algorithms, comparing Cubic, BBRv1, and BBRv2’s real-world performance. The results? Read the analysis here: https://lnkd.in/g9yXJjBr
What do RAG users want? Lessons learned from a 1,800+ livestream audience. RAG used to be just for data scientists and engineers; now it's gone mainstream. With low-code tools like n8n, retrieval is now accessible and configurable to a whole new user base. They've discovered the limitations of LLM context windows and want to know more about RAG, like:
• When to use RAG
• Can RAG be hosted locally?
• Which are the best vector DBs?
But loads of users want to go even deeper into RAG, and it's not a surprise. As use cases and dataset sizes scale up, naive (vanilla) RAG is often not enough. So users also want to know how to optimize and improve the performance of their RAG systems. And they're asking deeper questions than I expected. The most common questions were about:
🟩 Chunking techniques - interest in advanced chunking, when to use which technique, optimal chunk sizes
↕️ Re-ranking - most are just using Cohere's API; I see a huge opportunity for bespoke re-rankers in the future
🖼️ Multimodal RAG - specifically RAG with diagrams and PDFs (Docling was by far the preferred method for parsing PDFs)
👩‍🔬 Context engineering - what is the right amount of context to provide in RAG?
Here's the link if you want to watch it! 🔗 Optimizing RAG in n8n: https://lnkd.in/g2bZCC84
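For reference, here is the "naive" baseline those chunking questions start from: a fixed-size chunker with overlap. The character-based sizes are illustrative only; production systems usually chunk by tokens or semantic boundaries, which is exactly where the advanced techniques come in.

```javascript
// Minimal fixed-size chunking with overlap, the naive baseline that
// advanced chunking techniques improve on. Overlap keeps context that
// straddles a chunk boundary retrievable from both sides.
function chunkText(text, chunkSize, overlap) {
  if (overlap >= chunkSize) throw new Error("overlap must be smaller than chunkSize");
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached
  }
  return chunks;
}
```

For example, chunkText("abcdefghij", 4, 2) yields ["abcd", "cdef", "efgh", "ghij"]: each chunk repeats the last two characters of the previous one.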
n8n at SCALE: Practical Strategies for Optimizing RAG
Back by popular demand, Kentik's Director of Internet Analysis, Doug Madory, revives the spirit of the classic "Baker's Dozen" AS rankings. In this deep dive, Doug goes back 20 years to trace the evolution of the internet's top transit providers and examines what the data reveals. Explore: 📈 A 20-year history of the top ASes, complete with a new interactive visualization 🌐 A technical look at the DFZ, de-peering disputes, and internet partitions that have shaped today's internet 📊 A fascinating traffic breakdown for a modern US provider (transit vs. peering vs. embedded cache) See how the internet's backbone has changed and explore two decades of data. 👇
🚨 Back by popular demand — I revisit the old Baker's Dozen blog post series 🍩 that my former colleagues at Renesys used to publish. The analysis would rank the top transit ASes of the internet but this time we're extending it over 20 years, using a nifty interactive visualization. Additionally, I discuss the DFZ, de-peering and partitions before delving into a sample breakdown of traffic volume for a typical mid-sized US provider by connectivity type: transit vs peering vs embedded cache vs IXP. Interesting stuff! https://lnkd.in/eSdFFv78
💭 What IF: Context + DFINITY Foundation's ICP → Powering the Self-Writing Internet What if your website wasn’t static... But alive? Able to think, adapt, verify, and respond in real time? If Context Protocol leverages ICP’s powerful infrastructure: → Canisters to host autonomous data logic → Internet Identity to verify real-world users, orgs & agents → HTTP outcalls to bridge offchain & onchain data seamlessly Then we don’t just get better websites — We get living, verifiable, AI-native domains that agents can: → Query for trusted knowledge → Transact through embedded tools → Coordinate across Brains This is how we shift: From “publish and pray” → to “verify and compose” From static web → to self-writing internet And it’s happening now.
😰 Ran into a nasty bug because I didn't understand HTTP 1 vs. 2 vs. 3. Learn from my pain. When a client and server want to talk to each other, they first negotiate the protocol they'll use to communicate. That protocol is HTTP, and it's evolved over the years, hence the versions.
• HTTP/1.0 was fully text-based, with CRLF line breaks separating the parts: a request line, one header per line (the user agent, for example), and a blank line before the body. It spun up a new TCP connection for every request/response cycle, so it was pretty inefficient, handshaking on every round trip.
• HTTP/1.1 fixed that inefficiency: TCP connections can be kept alive for multiple HTTP round trips. A little better!
• HTTP/2 switched the framing from text to binary (yay efficiency) and added multiplexed streams and server push (yay streaming). It's all still built on TCP though, which means one slow packet can hold up everything behind it (aka the head-of-line blocking problem).
• HTTP/3 moved off TCP entirely: it runs on QUIC, a UDP-based transport that originated at Google and is now an IETF standard, which builds in encryption and per-stream multiplexing and so solves the transport-level head-of-line blocking.
There's obviously a lot more to HTTP, but that's as much as I needed to know to debug my problem. 😅
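The "text-based with CRLF separators" point is easy to see in code. A tiny sketch of a parser for the HTTP/1.x wire format: one header per line, a blank line before the body. (Real parsers handle far more edge cases; this only shows the framing.)

```javascript
// HTTP/1.x on the wire is just text: CRLF ("\r\n") line endings,
// one header per line, and a blank CRLF line separating the header
// section from the body.
function parseHttp1Request(raw) {
  const [head, body = ""] = raw.split("\r\n\r\n");
  const lines = head.split("\r\n");
  const [method, path, version] = lines[0].split(" ");
  const headers = {};
  for (const line of lines.slice(1)) {
    const i = line.indexOf(":");
    headers[line.slice(0, i).toLowerCase()] = line.slice(i + 1).trim();
  }
  return { method, path, version, headers, body };
}
```

HTTP/2 and HTTP/3 carry the same semantics (methods, headers, bodies) but frame them in binary, so nothing like this string splitting applies there.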
I will be presenting vmagent failure modes at an internal session at One2N tomorrow. It will cover approaches to monitoring your metric ingestion pipeline; failure modes like delayed scraping, delayed writes, and backpressure; and how to develop a mindset for reasoning about these failure modes in a distributed-systems context. I have always found VictoriaMetrics dashboards and alerts to be the gold standard when sharing this knowledge. Their official Grafana dashboards are clean, have solid summary stats, and have well-defined sections for different scenarios (resource usage, troubleshooting, etc.). To top it all off, their documentation has point-by-point instructions on tweaking CLI flags in the context of their impact. Thanks Aliaksandr Valialkin and team for creating such a crisp and clear knowledge base around VictoriaMetrics. It is a standard worth emulating.
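As a rough sketch of what alerting on two of these failure modes can look like (rule names and thresholds here are illustrative, not VictoriaMetrics' official rules; `up` and `scrape_duration_seconds` are standard per-target scrape metrics):

```yaml
groups:
  - name: ingestion-pipeline
    rules:
      # Target stopped responding to scrapes entirely.
      - alert: ScrapeTargetDown
        expr: up == 0
        for: 5m
      # Scrapes still succeed but are getting slow (delayed scraping).
      # The 10s threshold is an arbitrary example.
      - alert: SlowScrapes
        expr: scrape_duration_seconds > 10
        for: 10m
```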
https://blog.bitdrift.io/post/mobile-metric-backfill