Flurry Analytics

Advertising Services

San Francisco, CA 7,021 followers

Trust the most adopted mobile app analytics solution.

About us

Flurry Analytics is the world’s first mobile app analytics provider, with over 1 million active apps on the platform, from start-ups to the Fortune 500. The solution is comprehensive, completely free, takes only five minutes to integrate, and features an easy-to-use dashboard that anyone in your company can use. Flurry provides useful, powerful insights out of the box, including real-time metrics. Explore usage, engagement, retention, geographic, demographic, audience, and technographic metrics, and more. For industry-leading insights, only Flurry allows apps to track up to 500 events along with unlimited parameters. For advanced users, Flurry includes custom querying for fast, on-demand data exploration, segmentation, user paths, and funnel analysis. The Flurry platform also includes robust Crash Reporting, Push Notification management, and Remote Configuration.
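The event-with-parameters tracking and funnel analysis described above can be sketched in a few lines. This is a hypothetical, self-contained illustration, not the Flurry SDK: the event names (`view_item`, `start_checkout`, `purchase`) and the `log_event`/`funnel` helpers are invented for the example.

```python
# Illustrative sketch only -- not the Flurry SDK; all names are invented.
from collections import defaultdict

events = []  # in-memory event log: (user_id, event_name, params)

def log_event(user_id, name, params=None):
    """Record one named event with optional parameters."""
    events.append((user_id, name, params or {}))

def funnel(steps):
    """Count users completing each step in order (a simple funnel)."""
    progress = defaultdict(int)   # user_id -> index of next expected step
    counts = [0] * len(steps)
    for user, name, _ in events:
        i = progress[user]
        if i < len(steps) and name == steps[i]:
            counts[i] += 1
            progress[user] = i + 1
    return counts

# Example: two users start checkout, one completes a purchase.
log_event("u1", "view_item", {"sku": "A1"})
log_event("u1", "start_checkout")
log_event("u1", "purchase", {"amount": "9.99"})
log_event("u2", "view_item", {"sku": "B2"})
log_event("u2", "start_checkout")

print(funnel(["view_item", "start_checkout", "purchase"]))  # [2, 2, 1]
```

A real analytics backend would of course persist events and evaluate funnels with time windows and ordering constraints; this only shows the shape of the data.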

Website
https://www.flurry.com
Industry
Advertising Services
Company size
10,001+ employees
Headquarters
San Francisco, CA
Type
Public Company
Founded
2005
Specialties
Mobile App Advertising, Mobile App Monetization, Mobile App Analytics, and Mobile App Engagement

Updates

  • Flurry Analytics reposted this

    View organization page for RunLLM

    🎥 Link to full mini-documentary in the first comment 👇👇

    We sat down with Vikram Sreekanti, Co-founder & CEO of RunLLM, UC Berkeley College of Engineering Ph.D. graduate, and former researcher at the Berkeley RISE Lab, where he worked on cloud infrastructure, serverless systems, and data platforms with Professors Joe Hellerstein and Joseph Gonzalez — before co-founding RunLLM with them and Chenggang Wu.

    Vikram's take? AI by itself isn’t a silver bullet. You can’t just throw a model at a problem and expect results. The real challenge is building a thoughtful application of that technology, and that’s actually the hardest part.

    Hear from Vikram and other experts on what it takes to assess and adopt AI in a way that actually works for companies.

    #AI #LLM #EnterpriseAI #TrustworthyAI #RunLLM #BerkeleyRISE #SupportEngineering #MachineLearning #AIProductDesign #VikramSreekanti

  • Flurry Analytics reposted this

    View profile for Peter Farago

    Marketing & Growth Leader

    I've spent about a decade closely studying Apple's approach to product design, usability, and everything around it since my time at Flurry Analytics, which saw app usage **every day** on 2 billion smartphones, most of them iPhones. I had unusual access, and it was pretty cool if I'm being honest.

    Anyway, their approach to layering in AI is very "Apple," which means they have to consider myriad use cases across a massive existing multi-product surface area, and they will not rush into it. Remember, Apple is a hardware company first, which means they "measure twice and cut once" (maybe even three times). In complex production with massive supply chains, you're working multiple years out, so little problems become existential problems. For people who do "run-and-gun," fast-cycle, iterative software work, it's very, very different. So it doesn't surprise me that they're first looking for the intersections where AI can add outsized, obvious value and layering in those experiences in a slow and measured way.

    At the same time, as a hardware-first company, they are not going to have the leading AI lab building the most cutting-edge models. In the early days of AI, they will need that trust, however, as users experience what LLMs can do for the first time through Apple products. And that experience needs to deliver.

    View organization page for RunLLM

    🍏 Ep 37: Why Didn’t Apple Ship More AI at WWDC?

    In today's LLMs on the Run, UC Berkeley College of Engineering Professor and RunLLM Co‑founder Joseph Gonzalez offers measured praise for Apple’s thoughtful, privacy-first approach to AI, but notes the models themselves still have room to improve. Joey says Apple is dialing in subtle, useful AI integrations like Live Translation, Visual Intelligence, Workout Buddy, and the Foundation Models framework—all with a light touch to preserve user experience. But he also hints that Apple needs stronger underlying models if it’s to truly match leaders like OpenAI and Google.

    👀 Joey’s Take:
    ✅ Apple is starting with value—integrating AI where it naturally fits
    ✅ Privacy and UX matter—and Apple leads on those fronts
    ✅ But to move from good to great, they’ll need more powerful models

    📣 Speaking of stepping up: the open offer to Apple (and the AI at Apple team) still stands: if you repost this episode, we’ll give you a $330 RunLLM rebate. We're a startup, but we do love Apple, and our whole engineering team uses Apple laptops to make our AI magic! And truly: use it however fits—maybe donate it to an AI intern happy hour. 😉

    Brought to you by RunLLM: https://lnkd.in/gMJsndXJ

    #LLMsOnTheRun #AppleIntelligence #PrivacyFirstAI #UXDesign #AIModels #RunLLM #GenAI #JoeyGonzalez

  • Flurry Analytics reposted this

    View profile for Vikram Sreekanti

    Co-founder & CEO, RunLLM

    Your “AI strategy” is a complete waste of time.

    It’s tempting to take a big-picture view of the world and try to come up with the Right™ way to use AI in your organization. It’s what companies did with mobile, cloud, and so on. A quick search for “HBR AI strategy” yields articles and podcasts with vague advice like “build new things” and “start with the problem, not the technology.” Unfortunately, strategies built on these kinds of principles are going to get you nowhere. Things are changing so fast in AI that your strategy will be obsolete by the time you finish writing it. A year ago, who would have predicted the community would invent a new AI benchmark (ARC-AGI) and then blow previous scores out of the water with o3 before the year was out?

    Instead, stick to some simple principles:
    1. Let the experts decide. No one will know better than your head of engineering whether an AI engineer will be good for your team. The same is true for support, marketing, sales, and so on.
    2. Know how to evaluate. It doesn’t need to be empirical, but don’t adopt AI for the sake of it. Have clear goals.
    3. Encourage using AI. If you put up roadblocks, you’re just going to fall behind.
    4. Accept failures. The market’s early, and not every tool is going to be perfect. Try lots of things and don’t be scared to say something didn’t work.

    Joseph Gonzalez and I talked about this on the blog right before the holidays. Check out that post for more detail, and subscribe if you’re interested in our observations about how AI is evolving: https://lnkd.in/gXasc3cM

  • Flurry Analytics reposted this

    View profile for Vikram Sreekanti

    Co-founder & CEO, RunLLM

    Trying new AI products is one of the most fun things to do, but it's always hard to find time. We finally made some time over the holidays to try Devin to see if it could help build RunLLM, and we thought it would be fun to share what we learned. As with all AI, it's early. What we found is that Devin is good at solving small, well-scoped bugs, especially in a frontend codebase. But it's a ways away from being able to tackle bigger tasks. It gets easily confused and skips critical steps. The promise of an AI software engineer is exciting, but we're not quite there yet. More detail in the full post below! 👇 https://lnkd.in/eQR7h3Mb

  • Flurry Analytics reposted this

    View organization page for Weights & Biases

    Evaluating LLMs: A Conversation with Joseph Gonzalez

    Our CEO and cofounder, Lukas Biewald, recently sat down with Joseph Gonzalez, EECS Professor at UC Berkeley and Co-Founder at RunLLM, to discuss the research he and his team have done on evaluating LLMs. Here are some of the highlights from this conversation:

    🔹 Vibes-Based Model Evaluation
    Joseph introduced the concept of "vibes," which evaluates not just accuracy but also the style of a model’s response—whether it’s friendly, concise, or narrative-driven. This approach is transforming how LLMs are refined for human interaction.
    👉 “Correctness is only part of the story—how a model communicates is just as critical. Llama is funnier and friendlier; OpenAI tends to be more formal and tends towards longer responses.” – Joseph Gonzalez

    🔹 Chatbot Arena: A Global Benchmark for LLMs
    Chatbot Arena (lmarena.ai) lets users compare LLMs side-by-side, creating a community-driven leaderboard for open-source and commercial models. Using the Bradley-Terry approach to analyze pairwise comparisons, this initiative segments performance by tasks like creative writing, coding, or instruction following, helping developers optimize workflows for their specific application.
    👉 “We want to democratize LLM evaluation—helping developers and the community improve models collaboratively.” – Joseph Gonzalez

    🔹 Collaborative AI Evaluation and Development
    Joseph shared insights on how LLM evaluation is evolving to incorporate human feedback and community input, offering a deeper understanding of model strengths and weaknesses. This participatory approach ensures that LLMs meet user needs across diverse use cases and applications.
    👉 “Human preference is about much more than accuracy—it’s about trust, interaction, and experience.” – Joseph Gonzalez

    🎥 Check out the full episode to explore Joseph’s insights on advancing LLM evaluation, fostering community collaboration, and refining AI-human interactions. https://lnkd.in/exD3xSui
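The Bradley-Terry approach mentioned above models the probability that model i beats model j as s_i / (s_i + s_j) for latent strengths s, which can be fit from pairwise win counts. Below is a minimal illustrative fit using the standard iterative-scaling (MM) update; the win matrix is invented for the example and is not Chatbot Arena data or its actual pipeline.

```python
# Minimal Bradley-Terry fit via iterative scaling (the classic MM update).
# Illustrative only: the win matrix below is invented, not Chatbot Arena data.

def bradley_terry(wins, iters=200):
    """wins[i][j] = number of times model i beat model j.
    Returns normalized strengths s with P(i beats j) = s[i] / (s[i] + s[j])."""
    n = len(wins)
    s = [1.0] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            total_wins = sum(wins[i])  # total wins for model i
            denom = sum((wins[i][j] + wins[j][i]) / (s[i] + s[j])
                        for j in range(n) if j != i)
            new.append(total_wins / denom if denom else s[i])
        norm = sum(new)
        s = [x / norm for x in new]    # normalize for numerical stability
    return s

# Three hypothetical models; model 0 wins most of its comparisons.
wins = [
    [0, 8, 9],
    [2, 0, 6],
    [1, 4, 0],
]
scores = bradley_terry(wins)
ranking = sorted(range(3), key=lambda i: -scores[i])
print(ranking)  # strongest model first
```

The fitted strengths induce the leaderboard ordering; in practice one would also report confidence intervals and, as the post notes, segment the comparisons by task category.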
