Table of Contents

Introduction
How LinkedIn Works: A Big Picture Look at the Three Pillars
  Pillar 1: Training (OpenConnect) - The Factory That Builds the Engine
  Pillar 2: Reasoning (360Brew) - The Powerful New Engine
  Pillar 3: Serving (vLLM) - The System That Runs the Engine at Scale
How LinkedIn's Algorithms Actually Work
  Step 1: Candidate Generation (The Initial Longlist)
    Cross-Domain Graph Neural Networks (GNNs): The Holistic Scout
    Heuristics & Similarity Search: The Fast Scouts
  Step 2: The 360Brew Ranking & Reasoning Engine (The New L2 Ranker)
    What it is: An Expert on the Professional World
    How it Works: The Power of the Prompt
    The Core Mechanism: In-Context Learning & The Illusion of Live Learning
  Step 3: The Art of the Prompt - The New "Secret Sauce"
    The "Lost-in-Distance" Challenge: The AI's Attention Span
    History Construction: Building the Perfect Briefing, Automatically
    Bringing It All Together: A Look Inside the Prompt
  Step 4: Finalization, Diversity & Delivery
    Applying Final Business Rules: The Platform Guardrails
    Ensuring Feed Diversity: From Manual Rules to Automated Curation
    Delivery: Formatting for the Final Destination
LinkedIn Profile Checklist for Marketers & Creators
  1. Profile Photo & Background Photo
  2. Headline
  3. About (Summary) Section
  4. Experience Section
  5. Skills Section (Endorsements & Skill Badges)
  6. Recommendations
  7. Education, Honors & Awards, Certifications, etc.
LinkedIn Content Pre-Launch Checklist for Creators
  I. Before You Post: Content Strategy & Creation
    1. Topic Selection & Conceptual Alignment
    2. Content Format Selection
    3. Crafting High-Quality, Engaging Content
  II. As You Post: Optimizing for Discovery & Initial Engagement
    4. Writing Compelling Copy & Headlines
    5. Strategic Use of Hashtags
    6. Tagging Relevant People & Companies (When Appropriate)
  III. After You Post: Fostering Engagement & Learning
    7. Engaging with Comments Promptly & Thoughtfully
LinkedIn Engagement Checklist for Marketers and Creators
  I. Quick Daily Engagements (5-15 minutes per day)
    1. Reacting Strategically to Relevant Feed Content
    2. Brief, Insightful Comments on 1-2 Key Posts
  II. Focused Daily/Regular Engagements (15-30 minutes per day/several times a week)
    3. Participating Actively in 1-2 Relevant LinkedIn Groups
    4. Sending Personalized Connection Requests
  III. More Involved Weekly/Bi-Weekly Engagements (30-60+ minutes per session)
    5. Writing and Publishing LinkedIn Articles or Newsletters
    6. Reviewing and Endorsing Skills for Connections
LinkedIn Newsfeed Technologies
  I. Offline Ecosystem: AI Asset Generation & Training
  II. Real-Time Data Infrastructure
  III. Online Serving Funnel (Real-Time Inference)
Advertisements
About TrustInsights.ai
Methodology and Disclosures
Sources

Learn more about Trust Insights at TrustInsights.ai
Introduction

If you've spent more than five minutes on LinkedIn in the last year, you've doubtless seen one or more "gurus" making definitive claims that they've "cracked the new algorithm." They'll tell you the magic number of comments to leave, the exact time to post, or the one type of content that gets "10x reach." Comment on their post within the first hour, they promise, and they'll sell you the secret to boosting your performance on LinkedIn.

For a long time, that advice, while often simplistic, was at least pointed in the right direction. It was based on the idea of a complex, multi-stage pipeline of machine learning models that processed signals. A like was a signal. A comment was a stronger signal. A keyword was a signal. The game was to send the best signals to a sophisticated but ultimately mechanical system. Our previous guides, including the Mid 2025 Edition, were designed to help you understand that very system.

That's not how LinkedIn works anymore.

The reality is that the slow, incremental evolution of LinkedIn's feed has been superseded by a sudden, fundamental revolution. The change is so profound that most existing advice, even from just four months ago, is now obsolete (for real, we deleted our guide from May). LinkedIn hasn't just upgraded its engine; it has ripped out the entire mechanical assembly line and replaced it with a single, powerful, centralized brain.

What that means for you and me is that the old game of "sending signals" is over. The new game is about having a conversation. There is no "hack" for a system that is designed to understand language, context, and reasoning in a way that mirrors human comprehension. This isn't just a new algorithm. It's an entirely new ecosystem, and it runs on a different fuel: language.

What that also means is that if we understand this new ecosystem, how its central reasoning engine thinks, and what it values, we can align our efforts with the way it is built. This isn't hacking anything.
This is learning how to communicate effectively. It's about moving from crafting signals for a machine to crafting a compelling narrative for an intelligent reader.

So how do we do this? By listening, once again, to what LinkedIn has had to say. In this totally unofficial guide, still not at all endorsed by anyone at LinkedIn, we have synthesized a new wave of academic papers, engineering blogs, and conference presentations from LinkedIn's own AI researchers. We've used generative AI to boil down dozens of new
sources that describe this paradigm shift in detail. They've given this new system a name, 360Brew, and have been surprisingly open about how it works. Each step of the process we outline details what we can do to best work with this new architecture.

This guide is STILL not endorsed or approved by LinkedIn; no LinkedIn employees were consulted outside of using public data published by those employees.

After the walkthrough of this new ecosystem, you'll find THREE updated toolkits you can copy and paste into your favorite generative AI tool. These checklists have been completely re-framed to align with this new, language-driven paradigm:

● The LinkedIn Profile Checklist: Use this toolkit to transform your profile from a list of data points into a compelling dossier that communicates your expertise directly to the reasoning engine.

● The LinkedIn Content Pre-Launch Checklist: Use this toolkit to craft content that is not just keyword-optimized, but is structured, well-reasoned, and conversation-starting: the exact qualities the new system is designed to identify and amplify.

● The LinkedIn Engagement Checklist: Use this toolkit to guide your daily and weekly activities, helping you provide the highest-quality context signals that inform the AI's in-context learning and build a strong foundation for your visibility.

The age of chasing algorithmic hacks is over. The age of clear, compelling, and valuable communication has begun. This guide will show you how to thrive in it.

Got Questions? This guide comes with a NotebookLM instance you can ask questions of interactively: https://notebooklm.google.com/notebook/d7b059b9-ba79-4ade-a4fc-2d18d8e26588
A Word From Our Sponsor, Us

Ready to transform your AI marketing approach?

Done By You services from Trust Insights:

- Almost Timeless: 48 Foundation Principles of Generative AI: A non-technical AI book by cofounder and Chief Data Scientist Christopher Penn, Almost Timeless teaches you how to think about AI and apply it to your organization.

Done With You services from Trust Insights:

- AI-Ready Strategist: Ideal for CMOs and C-Suite leaders, the AI-Ready Strategist teaches you frameworks and methods for developing, deploying, and managing AI at any scale, from the smallest NGO to the largest enterprises, with an emphasis on people, process, and governance.

- Generative AI Use Cases for Marketers course: Learn the 7 major use case categories for generative AI in marketing with 21 different hands-on exercises, all data and prompts provided.

- Mastering Prompt Engineering for Marketing course: Learn the foundation skills you need to succeed with generative AI, including 3 major prompt frameworks, advanced prompting techniques, and how to choose different kinds of prompts based on the task and tool.

Done For You services from Trust Insights:

- Customized consulting: If you love the promise of analytics, data science, and AI but don't love the huge amount of work that goes into fulfilling that promise, from data governance to agentic AI deployment, let us do it for you. We've got more than a decade of real-world AI implementation (AI existed long before ChatGPT) built on your foundational data so you can reap the benefits of AI while your competitors are still figuring out how to prompt.

- Keynote talks and workshops: Bring Trust Insights to your event! We offer customized keynotes and workshops for conferences, company retreats, executive leadership meetings, annual meetings, and roundtables.
Every full-fee talk is customized to your event, industry, or company, and you get the talk recording and materials (transcripts, prompts, data) for your audience to work with and learn from.
The Shift from a Feature Factory to a Reasoning Engine

To understand how radically things have changed, we first need to understand the old world of "features." For years, recommendation systems, including LinkedIn's, were built like intricate assembly lines. Your profile, your posts, and your activity were chopped up into thousands of tiny, distinct pieces of data called "features." A feature is a number, a category, a simple signal. For example:

● comment_count_last_24h = 12
● is_in_same_industry = 1 (for yes) or 0 (for no)
● post_contains_keyword_AI = 1
● author_seniority_level = 5

The old system took these thousands of numerical signals and fed them into a series of specialized models (our previous guide called them L0, L1, and L2 rankers). Each model in the pipeline was an expert at one thing: the L0 found thousands of potential posts, the L1 narrowed them down, and the L2 performed the final, sophisticated ranking. It was an industrial marvel of machine learning engineering, a "feature factory" that was incredibly good at optimizing for signals.

But this approach had inherent limitations. It struggled with nuance. It couldn't easily understand a new job title it had never seen before. It required massive teams of engineers to constantly create and maintain these features, a process that created enormous technical debt. Most importantly, it was a system that processed numbers, not ideas.

The new ecosystem throws that entire assembly line away. At its heart now sits a single, massive foundation model. A foundation model is a type of AI, like those powering ChatGPT or Claude, that is pre-trained on a colossal amount of text and data, enabling it to acquire a general understanding of language, concepts, and reasoning. LinkedIn's researchers took a powerful open-source foundation model (specifically, Mistral AI's Mixtral 8x22B model) and then subjected it to an intense, months-long "PhD program" focused exclusively on LinkedIn's professional ecosystem.
They trained it on trillions of tokens of data representing the interactions between members, jobs, posts, skills, and companies.
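To make the contrast concrete, the old "feature factory" input can be sketched as a flat bag of numbers scored by a learned function. The feature names and weights below are illustrative stand-ins, not LinkedIn's actual model:

```python
# Hypothetical sketch of the old "feature factory" input: every signal
# about a member/post pair is reduced to a number before scoring.
features = {
    "comment_count_last_24h": 12,
    "is_in_same_industry": 1,       # 1 = yes, 0 = no
    "post_contains_keyword_AI": 1,
    "author_seniority_level": 5,
}

# A classic ranker boils down to a learned scoring function over these
# numbers; real systems used gradient-boosted trees or neural networks,
# but the input was still just numbers, never ideas.
weights = {
    "comment_count_last_24h": 0.05,
    "is_in_same_industry": 0.8,
    "post_contains_keyword_AI": 0.4,
    "author_seniority_level": 0.1,
}

score = sum(weights[name] * value for name, value in features.items())
print(round(score, 2))  # → 2.3
```

A system like this can only rank what has already been turned into a number, which is exactly the limitation the foundation-model approach removes.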
The result is 360Brew. It is not a feature factory; it is a reasoning engine. It no longer asks, "What are the numerical features for this post?" Instead, the system itself writes a prompt, in plain English, that looks something like this:

"Here is the complete profile of a Senior Product Manager at Salesforce. Here is their recent activity: they've commented on posts about product strategy and liked articles about market positioning. Here is a new post from a marketing expert about a novel go-to-market framework. Based on everything you know, predict the probability that this Product Manager will comment on this new post."

This is not a signal-processing problem. This is a reading comprehension and reasoning problem. The model reads the text, understands the concepts, contextualizes the member's past behavior, and makes a nuanced prediction. It has moved from arithmetic to analysis.

The New Premise: Your Success is Determined by Your Prose

This fundamental shift changes everything for marketers, creators, and professionals on the platform. The quality of your visibility is no longer determined by how well you can feed signals into a machine, but by how compellingly you can articulate your value in plain language. Your text is the input. This new reality is built on two core capabilities of modern foundation models:

1. Zero-Shot Reasoning: 360Brew can understand and reason about concepts it hasn't been explicitly trained on. Because it understands language, it can see a new job title like "Chief Metaverse Officer" and infer its likely seniority, relevant skills, and industry context, even if it has never seen that exact title before. It no longer needs a feature called job_title_id = 9875.

2. In-Context Learning (ICL): This is perhaps the most crucial concept for you to understand. The model learns on-the-fly from the information provided directly in the prompt.
A member's recent activity isn't just aggregated into a historical "embedding" over time; it's presented as a series of fresh examples in the moment. When the model sees that you just liked three posts about artificial intelligence, it learns, for that specific session, that you are highly interested in AI right now. It creates a temporary, personalized model for you, in real-time, based on the context it's given.
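The in-context learning idea can be sketched as simple prompt assembly. Everything below is our illustration: the field names, the wording, and the "most recent N actions" selection rule are assumptions, not LinkedIn's production prompt format:

```python
# Illustrative sketch of how a ranking prompt might be assembled for a
# reasoning engine like 360Brew. All names and wording are hypothetical.

def build_ranking_prompt(profile: str, recent_actions: list[str],
                         candidate_post: str, max_examples: int = 3) -> str:
    # In-context learning: the member's freshest interactions are placed
    # directly in the prompt as examples, so the model "learns" the
    # member's current interests for this one prediction.
    examples = "\n".join(f"- {a}" for a in recent_actions[-max_examples:])
    return (
        f"Member profile:\n{profile}\n\n"
        f"Recent activity:\n{examples}\n\n"
        f"Candidate post:\n{candidate_post}\n\n"
        "Based on everything above, predict the probability that this "
        "member will comment on the candidate post."
    )

prompt = build_ranking_prompt(
    profile="Senior Product Manager at Salesforce.",
    recent_actions=[
        "Liked an article about market positioning",
        "Commented on a post about product strategy",
        "Liked a post about AI roadmaps",
    ],
    candidate_post="A novel go-to-market framework for B2B products.",
)
print(prompt)
```

Note that the member is described entirely in prose: change the text of the profile or the recent activity and you change the model's "understanding" for that session, with no retraining involved.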
With this understanding, we can establish the new guiding principles for success on LinkedIn. Think of your presence on the platform as creating a dossier for this powerful reasoning engine.

● Your Profile is the Dossier's Executive Summary. The AI reads it. Your headline is not just a collection of keywords; it is the title of your professional story. Your About section is no longer optional filler; it is the abstract that provides the essential narrative and context for everything else. Your Experience descriptions are the evidence, the case studies that prove your expertise. The prose you use to describe your accomplishments is a direct, primary input to the ranking and recommendation engine.

● Your Content is the Case Study. Each post you create is a new piece of evidence presented to the engine. It is evaluated not just on its topic, but on its structure, clarity, and the value of its argument. A well-written, insightful post that clearly articulates a unique perspective is now, by its very nature, optimized for the system. The model is designed to identify and reward expertise, and the primary way it understands expertise is by analyzing the language you use to express it.

● Your Engagement is the Live Briefing. Your likes, comments, and shares are no longer just simple positive (+1) or negative (-1) signals. They are the examples you provide for in-context learning. When you leave a thoughtful comment on an expert's post, you are telling the AI, "This is the caliber of conversation I find valuable. This is part of my professional identity." You are actively curating the real-time examples that the model uses to understand you, making your engagement a powerful tool for shaping your own content distribution.
How LinkedIn Works: A Big Picture Look at the Three Pillars

A revolutionary model like 360Brew, with its ability to read, reason, and understand language, doesn't just spring into existence. It is the end product of a colossal industrial and engineering effort. To think that you are simply interacting with "an algorithm" is like looking at a brand-new electric vehicle and calling it "a wheel." You are missing the vast, complex, and deeply interconnected ecosystem that makes it possible.

To truly understand how to work with this new system, you can't just look at the engine; you have to understand the entire assembly line. The modern LinkedIn AI is built on three foundational pillars, a lifecycle that can be understood as: Build, Think, and Run.

1. The Factory (OpenConnect): The sophisticated, high-speed manufacturing plant where the AI models are built, trained, and continuously improved.

2. The Engine (360Brew): The powerful, centralized reasoning engine that, once built, does the "thinking": analyzing your profile, content, and interactions to make decisions.

3. The Operating System (vLLM): The high-performance infrastructure that "runs" the engine at a global scale, serving real-time predictions to over a billion members.

Understanding these three pillars will give you a complete picture of the forces shaping your visibility on the platform. It will move you from guessing at tactics to developing a durable strategy based on the fundamental principles of the entire ecosystem.

Pillar 1: Training (OpenConnect) - The Factory That Builds the Engine

Before a single recommendation can be made, the AI model itself must be built. This is a process of immense scale, involving the ingestion and processing of petabytes of data, the digital equivalent of all the books in the Library of Congress, multiplied many times over. The "factory" where this happens is LinkedIn's next-generation AI pipeline platform, called OpenConnect.
To appreciate how significant this is, you have to understand the old factory it replaced. For years, LinkedIn's AI pipelines ran on a legacy system called ProML. While powerful for its time, it became the digital equivalent of a turn-of-the-century assembly line: slow, rigid,
and prone to bottlenecks. LinkedIn's own engineers reported that making a tiny change to a model, tweaking a single parameter, could require a full 15-minute rebuild of the entire workflow. Imagine if a car factory had to shut down and retool the entire assembly line just to change the color of the paint. The pace of innovation was throttled by the very tools meant to enable it.

OpenConnect is the gleaming, modern gigafactory that replaced it. It was built on principles of speed, reusability, and resilience, and its design directly impacts how quickly the platform's AI can evolve. For you as a marketer or creator, a smarter factory means a smarter AI that learns and adapts faster. Here's how it works:

Core Principle 1: Standardized, Reusable Parts (The Component Hub)

A modern factory doesn't build every single screw and bolt from scratch for every car. It uses standardized, high-quality components from trusted suppliers: a Bosch fuel injector, a Brembo brake system. OpenConnect does the same for AI. LinkedIn's platform and vertical AI teams have built a comprehensive library of reusable, pre-approved "components." These are standardized pieces of code for common tasks: a component for collecting data, a component for analyzing a model's performance, a component for processing text. Teams can then assemble these trusted components to build their unique AI pipelines.

● What this means for you: This ensures a level of quality and consistency across the entire platform. The AI component that understands your job title in the context of the feed is built from the same foundational block as the one that matches your profile to a job description. This shared understanding allows the AI to make more coherent connections about you across different parts of the platform. When one component is improved, for example, a better way to understand skills from text, that improvement can ripple across the ecosystem, making the entire system smarter at once.
Core Principle 2: The Need for Speed (Decoupling and Caching)

The biggest problem with the old factory was its speed. Everything was tangled together. A change in one area required rebuilding everything. OpenConnect solved this with two key innovations that are simple to understand.

● Decoupling: The new system isolates every component. Changing one part of the pipeline no longer requires rebuilding the whole thing. It's like a modern pit crew: they can change a tire without having to touch the engine.
● Caching: The system intelligently saves the results of previous work. Once a component has been built, it's stored in a ready-to-use state (in a Docker image or a manifest file). When an engineer wants to run an experiment, the system doesn't rebuild everything from scratch; it pulls the pre-built components off the shelf and runs them immediately. LinkedIn reports that this new architecture reduced workflow launch times from over 14 minutes to under 30 seconds.

● What this means for you: This dramatic increase in speed for LinkedIn's engineers translates into a faster pace of innovation for you. An engineer who can run 20 experiments a day instead of one is an engineer who can test more ideas, find what works, and improve the AI much faster. This is why the feed, job recommendations, and other AI-driven features seem to be evolving at a breakneck pace. The factory is running at full speed, constantly running A/B tests and shipping improvements. The "algorithm" is no longer a static target; it's a rapidly evolving system, and this factory is the reason why.

Core Principle 3: Unwavering Reliability (Disruption Readiness)

An AI factory that processes petabytes of data is under immense strain. Servers need maintenance, networks can have hiccups, and hardware can fail. In the old world, a disruption like a server reboot could kill a multi-day training job, forcing engineers to start over from scratch, wasting days of work and computational resources.

OpenConnect was designed for resilience. It uses a system of active checkpointing, which is a fancy way of saying it is constantly saving its work. During a long training process, the system automatically saves the model's parameters, its progress, and exactly where it was in the dataset. If a disruption occurs, Flyte (the open-source workflow engine at the heart of OpenConnect) simply restarts the job on a new set of servers, and it picks up from the last checkpoint.
LinkedIn states this has reduced training failures due to infrastructure disruptions by 90%.

● What this means for you: The platform's core AI is more robust and reliable than ever. This stability allows LinkedIn to train even larger, more complex models with confidence, knowing that the process won't be derailed. This industrial-grade reliability is a prerequisite for building a foundation model as massive and critical as 360Brew.
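The checkpoint-and-resume idea can be sketched in a few lines. This is a minimal illustration in the spirit of what LinkedIn describes; the file format, state layout, and save interval here are our own choices, not OpenConnect's or Flyte's actual mechanism:

```python
# Minimal sketch of active checkpointing: progress is saved periodically,
# so a restarted job resumes from the last checkpoint instead of step 0.
import json
import os

CKPT = "checkpoint.json"  # hypothetical checkpoint file

def save_checkpoint(step: int, model_state: dict) -> None:
    with open(CKPT, "w") as f:
        json.dump({"step": step, "model_state": model_state}, f)

def load_checkpoint() -> dict:
    if os.path.exists(CKPT):
        with open(CKPT) as f:
            return json.load(f)
    return {"step": 0, "model_state": {}}  # fresh start

def train(total_steps: int) -> int:
    state = load_checkpoint()
    # Resume from the saved step rather than from zero.
    for step in range(state["step"], total_steps):
        state["model_state"]["loss"] = 1.0 / (step + 1)  # stand-in for real work
        if step % 100 == 0:
            save_checkpoint(step, state["model_state"])
    return total_steps

# A disruption after, say, step 250 loses at most the work done since the
# checkpoint at step 200; rerunning train() picks up from there.
```

The design choice worth noting is that the checkpoint records both the model state and the position in the data, so a resumed job neither repeats nor skips training examples.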
Pillar 2: Reasoning (360Brew) - The Powerful New Engine

After the OpenConnect factory has processed the data and assembled the training pipeline, it's time to build the engine itself: 360Brew. As we discussed, this is the centralized brain of the new ecosystem. It is a massive, 150B+ parameter foundation model that has been meticulously fine-tuned to become the world's leading expert on the professional economy as represented by LinkedIn's data. To understand how this engine "thinks," we need to look beyond its size and explore its architecture, its education, and its inherent limitations.

Inside the Engine: A Boardroom of Experts (Mixture-of-Experts)

360Brew is based on the Mixtral 8x22B architecture, which uses a clever design called Mixture-of-Experts (MoE). This is a critical detail that makes a model of this scale feasible.

Imagine a traditional AI model as a single, giant brain. To answer any question, the entire brain has to light up and work on the problem. This is computationally expensive. An MoE model, however, works more like a boardroom of specialized consultants. The Mixtral 8x22B model, for example, can be thought of as having 8 distinct "expert" networks. When a piece of text comes in (a "token"), the model's internal routing system intelligently selects the two most relevant experts to handle it. If the token is part of a marketing post, it might route it to the experts in business language and strategy. The other six experts remain dormant, saving energy.

This design allows the model to have a colossal number of total parameters (giving it vast knowledge) while only using a fraction of them for any given task, making it far more efficient to train and run than a traditional model of equivalent size.

● What this means for you: The engine ranking your content is not a monolithic generalist; it's a team of specialists. This architecture allows for a much deeper and more nuanced understanding of different professional domains.
It can apply its "engineering expert" to understand a technical post and its "sales expert" to understand a post about enterprise selling, leading to more accurate and context-aware evaluations.

The Curriculum: A PhD in the Professional World
A base foundation model like Mixtral starts with a generalist "bachelor's degree" from reading the public internet. To turn it into 360Brew, LinkedIn put it through an intensive post-graduate program. This training happens in several key stages:

1. Continuous Pre-training (The Daily Reading): This is where the model ingests trillions of tokens of LinkedIn's proprietary data. It reads member profiles, job descriptions, posts, articles, and the connections between them. This is the phase where it moves beyond general internet knowledge and learns the specific language, entities, and relationships of the professional world. It learns that "PMP" is a certification, that "SaaS" is an industry, and that a "VP of Engineering" is more senior than a "Software Engineer."

2. Instruction Fine-Tuning (The Classroom Learning): After it has the knowledge, the model is taught how to use it. In this stage, it's trained on a dataset of questions and answers relevant to LinkedIn's tasks. It learns to follow instructions, like the ones in the prompts we've discussed. This is where it learns to take a request like "Predict whether this member will like this post" and understand the desired output format and reasoning process.

3. Supervised Fine-Tuning (The On-the-Job Training): Finally, the model is trained on millions of real-world examples with known outcomes. It's shown a member's profile, their history, a post, and then the correct answer: "this member did, in fact, comment." By training on countless examples, it refines its ability to predict future actions with high accuracy.

● What this means for you: The 360Brew engine has an incredibly deep and nuanced understanding of the professional world. Its knowledge is not superficial; it has been trained to understand the intricate web of relationships between skills, job titles, companies, and content that defines your professional identity.
It understands that a comment from a recognized expert in your field is more significant than a like from a random connection.

The Engine's Blind Spot: The "Lost-in-Distance" Problem

No system is perfect, and this one has a fascinating and critical limitation that stems directly from its architecture. Researchers at LinkedIn (and elsewhere) discovered that while foundation models are great at handling long contexts, their performance suffers from a phenomenon they've termed "Lost-in-Distance."

Think of it like reading a long and complex report. You are very likely to remember the main point from the introduction (the beginning of the context) and the final conclusion (the end of the context). However, if the report requires you to connect a subtle detail on page 32
with another crucial detail on page 157, you might miss the connection. The two pieces of information are simply too far apart for you to easily reason across them.

Foundation models have the same problem. When given a long prompt containing a member's interaction history, the model is very good at using information at the beginning and end of that history. But if two critical, related data points are separated by thousands of other tokens in the middle of the prompt, the model may struggle to connect them and its predictive accuracy can drop significantly.

● What this means for you: This is one of the most important technical limitations for a marketer to understand. It means that the structure and curation of the information presented to the AI are just as important as the information itself. When LinkedIn builds a prompt for 360Brew, their engineers cannot simply dump your entire chronological history into it. They must use intelligent strategies, like prioritizing your most recent interactions, or finding past interactions that are most similar to the content being ranked, and place that crucial information where the model is most likely to "see" it. This is why a clear, concise profile and well-structured content are so valuable: they make your key information easier for the model to find and reason with, overcoming its inherent blind spot.

Pillar 3: Serving (vLLM) - The System That Runs the Engine at Scale

Once the OpenConnect factory has built the 360Brew engine, there's one final, monumental challenge: how do you actually run it? A 150B+ parameter model is an incredible piece of technology, but it's also a computational behemoth. You can't just install it on a standard server. Running it for a single user is demanding; running it for hundreds of millions of active users in real-time is an engineering problem of the highest order. This is where the third pillar comes in: the serving infrastructure.
If 360Brew is the Ferrari engine, the serving layer is the custom-built chassis, transmission, and fuel-injection system required to get that power to the road without it tearing itself apart. For this, LinkedIn has turned to a popular, high-performance open-source framework called vLLM.

For a marketer, the deep technical details of vLLM (like its PagedAttention memory management) are not important. What is important are the implications of LinkedIn's choice to build its serving layer on top of a rapidly evolving, globally developed open-source project.
The Implication: A System in Constant Flux

By adopting vLLM, LinkedIn's AI serving stack is not a static, internally developed piece of software that gets updated once or twice a year. It is a living, breathing system that benefits from the collective R&D of the entire global AI community. LinkedIn's own engineers have documented their journey through multiple versions of vLLM in just a matter of months, with each new version bringing significant performance improvements. They are not just consumers of this technology; they are active contributors, submitting their own performance optimizations back to the open-source project for everyone to use.

● What this means for you: This is the ultimate reason why chasing short-term "hacks" is a fool's errand. The very foundation on which the ranking engine runs is changing and improving on an ongoing basis. A loophole or quirk you might discover in the system's behavior at breakfast could be completely gone by lunch, not because a LinkedIn product manager decided to change it, but because the underlying vLLM engine received an update that changed how it schedules and processes requests on the GPU.

This constant state of flux and improvement makes it impossible to "game" the system for long. The only durable strategy is to align with the core principles of the ecosystem: providing the highest quality textual information via your profile, creating valuable and well-reasoned content, and engaging in a way that builds a strong, coherent professional identity. These are the inputs that will always be valued, regardless of how the underlying technology evolves.
How LinkedIn’s Algorithms Actually Work Now that we understand the three pillars of the modern LinkedIn AI—the factory (OpenConnect), the engine (360Brew), and the operating system (vLLM)—we can explore how they work together to deliver the content you see in your feed. The old concept of a rigid, multi-stage funnel (L0, L1, L2) has been fundamentally transformed. The assembly line of specialized models has been replaced by a more dynamic and intelligent workflow centered around the powerful 360Brew reasoning engine. However, the core challenge remains the same: scale. Every day, billions of potential posts, updates, and articles are created across the platform. It is computationally impossible, even for an engine as efficient as 360Brew, to read and evaluate every single one of them for every member. The system still needs a way to intelligently narrow down this vast ocean of content to a small, manageable stream of the most promising items. Therefore, the funnel still exists, but it has been reimagined. It's no longer a series of ever-smarter filters; it's a two-stage process. First, an incredibly broad but efficient "scouting" system finds a few thousand promising candidates. Then, the expert "general manager"—360Brew—is brought in to do the deep, nuanced final evaluation. This section will walk you through this reimagined funnel step-by-step. For each stage, we will explain: ●​ What happens: A description of the technical process taking place. ●​ So what?: The direct implications for you as a marketer, creator, or professional. ●​ Now what?: Action-oriented guidance on how you can align your strategy with this stage of the process. Step 1: Candidate Generation (The Initial Longlist) This is the very top of the funnel, where the magic of personalization begins. The system's first task is to sift through the billions of potential posts in the LinkedIn universe and select a few thousand that might be relevant to you. This is a game of recall, not precision. 
The goal is not to find the single best post, but to ensure that the best post is somewhere in the initial pool of candidates. If your content doesn't make it into this initial "longlist," it has zero chance of being seen, no matter how good it is.
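The recall-then-precision funnel described above can be sketched as a toy two-stage pipeline. Everything here (the post data, the cheap overlap filter, the stand-in ranker) is a hypothetical illustration of the architecture, not LinkedIn's actual code.

```python
# Toy sketch of a two-stage recommendation funnel: a cheap, broad
# candidate generator (recall) followed by an expensive ranker (precision).
# All data and scoring rules are invented for illustration.

def generate_candidates(posts, member_interests, k=1000):
    """Stage 1: cheap filter. Keep any post that overlaps the member's
    interests at all; better to over-include than to miss the best post."""
    return [p for p in posts if p["topics"] & member_interests][:k]

def rank(candidates, member_interests, n=10):
    """Stage 2: expensive per-item scoring (standing in for 360Brew).
    Orders the small shortlist carefully."""
    return sorted(
        candidates,
        key=lambda p: len(p["topics"] & member_interests),
        reverse=True,
    )[:n]

posts = [
    {"id": 1, "topics": {"ai", "marketing"}},
    {"id": 2, "topics": {"cooking"}},
    {"id": 3, "topics": {"ai", "marketing", "b2b"}},
]
interests = {"ai", "b2b", "marketing"}
shortlist = generate_candidates(posts, interests)
feed = rank(shortlist, interests)  # post 3 outranks post 1; post 2 never enters
```

The point of the split is economic: the stage-1 filter must be cheap enough to run over everything, while the stage-2 scorer only ever sees the shortlist.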
To accomplish this at an incredible speed, the system uses several efficient methods running in parallel. These methods are designed to be fast and broad, casting a wide net to pull in a diverse set of potential content. The two primary methods are Cross-Domain Graph Neural Networks and Heuristic-Based Retrieval. Cross-Domain Graph Neural Networks (GNNs): The Holistic Scout ●​ What happens:​ LinkedIn maintains a colossal, constantly updated map of its entire professional ecosystem, known as the Economic Graph. This isn't just a list of members and companies; it's a complex web of interconnected nodes and edges representing every entity and every interaction: you (a node) are connected to your company (a node) with a "works at" edge; you are connected to another member with a "connection" edge; you are connected to a post with a "liked" edge.​ A Graph Neural Network (GNN) is a specialized type of AI designed to learn from this very structure. It can "walk" the graph, learning patterns from the relationships between nodes. The most significant evolution here is that LinkedIn's GNN is now cross-domain.​ Previously, a GNN might have been trained just on Feed data to recommend Feed content. The new Cross-Domain GNN is holistic. It ingests and learns from your activity across the entire platform. It sees the jobs you click on, the notifications you open, the influencers you follow in your email digests, the skills you endorse, and the articles you share. It then uses this complete, 360-degree view of your professional interests to find potential content. For example, if you've recently started clicking on job postings for "Product Marketing Manager," the GNN learns that you are interested in this topic. It can then walk the graph to find high-quality posts, articles, and discussions about product marketing, even if you've never explicitly engaged with that topic in the feed before.
It uses your behavior in one domain (Jobs) to inform its recommendations in another (the Feed). ●​ So what?:​ This means your professional identity on LinkedIn is no longer siloed. The system is building a single, unified understanding of you based on the totality of your actions. Every click, every follow, every job application is a piece of data that refines your "member embedding"—your unique digital fingerprint on the Economic Graph. The system is constantly trying to answer the question, "Based on everything this member does on our platform, what are they truly interested in professionally?" Your content gets pulled into the longlist if its own "graph neighborhood"—the topics,
skills, and people it's connected to—strongly overlaps with a member's holistic interests. ●​ Now what?:​ Your goal is to create a clear, consistent, and coherent professional identity across the entire platform, not just in your posts. ○​ Build a Relevant Network: Your connections are a primary signal. Connect with professionals in your target industry and with individuals who engage with the kind of content you create. When your connections engage with your content, it signals to the GNN that your post is relevant to that specific "graph neighborhood," increasing its chances of being shown to their connections (your 2nd and 3rd-degree network). ○​ Maintain Your Profile as Your Professional Hub: The skills listed on your profile, the job titles you've held, and the companies you've worked for are powerful, stable nodes in the graph. The GNN uses this information as an anchor for your identity. If your profile clearly states your expertise in "B2B SaaS Marketing," the GNN is far more likely to identify your content on that topic as relevant. ○​ Engage Authentically Beyond the Feed: Your activity is not just about feed engagement. Clicking on a job ad, following a company, or even watching a LinkedIn Learning video are all signals that the Cross-Domain GNN uses. Engage with the platform in a way that authentically reflects your professional interests and goals. This holistic activity provides the rich data the GNN needs to understand who you are and, by extension, who your content is for. Heuristics & Similarity Search: The Fast Scouts While the GNN is incredibly powerful for understanding deep relational patterns, it's also computationally intensive. To supplement it, the system uses several faster methods to ensure the longlist is filled with timely, fresh, and obviously relevant content. ●​ What happens:​ This stage uses a combination of simpler, rule-based methods (heuristics) and efficient search techniques to quickly find candidates. 
○​ Heuristic-Based Retrieval: These are simple, common-sense rules that can be executed at massive scale with very low latency. Examples include:
■​ Timeliness: Show very recent posts from a member's direct connections. ■​ Recent Interaction: If a member just commented on one of your posts, the system is more likely to pull your next post into their longlist. ■​ Velocity: Posts that are gaining unusually high engagement (likes, comments) very quickly are flagged and pulled into more longlists to see if they have viral potential. ○​ Similarity Search (Embedding-Based Retrieval): This is a more sophisticated but still incredibly fast method. As we've discussed, every piece of content and every member has an "embedding" or digital fingerprint. The system can take your member embedding and, in a fraction of a second, search a massive database (using specialized technology like FAISS or ScaNN) for posts with the most similar embeddings. "Similar" can mean many things: similar topics, similar style, or content liked by members with similar profiles to yours. This allows the system to find topically relevant content even if it's from creators you're not connected to. ●​ So what?:​ This means that speed and topical clarity are crucial factors for getting into the initial longlist. While the GNN looks at your deep, long-term identity, these faster methods are focused on the "here and now." A well-timed post on a trending topic, or one that gets a quick burst of initial engagement, can leverage these heuristics to get a significant initial boost in visibility. Similarly, content that has a very clear and distinct topical focus will have a "sharper" embedding, making it easier for the similarity search to find and match it with the right audience. ●​ Now what?:​ Your strategy here should be focused on creating clear, timely content and fostering immediate engagement. ○​ Be Clear and Specific: When you write a post, have a single, clear topic in mind. 
A post about "The Impact of AI on B2B SaaS Go-to-Market Strategy" will generate a much more distinct and matchable embedding than a vague post about "The Future of Business." Avoid muddled or overly broad topics in a single post if you want to be found via similarity search. ○​ Encourage Quick Engagement: The first hour after you post is still critically important. This is the window where you can trigger the velocity-based heuristics. Encourage discussion by asking questions. Be present to reply to comments immediately. This initial flurry of activity is a powerful signal that your content is resonating, which can significantly increase its initial reach.
○​ Engage Authentically with Others: The recent interaction heuristic is a two-way street. When you thoughtfully engage with content from others in your target audience, you are increasing the probability that the system will show your next post to them. Authentic engagement is not just about building relationships; it's a direct technical signal to the candidate generation system. ○​ Tap into Trending Topics (When Relevant): If there is a significant conversation happening in your industry, creating a timely and insightful post on that topic can leverage the system's ability to identify and boost trending content. Don't force it, but when a topic aligns with your expertise, timeliness can be a powerful amplifier. By understanding this first crucial step, you can see that getting visibility is not about a single magic bullet. It's about building a strong, coherent professional identity (for the GNN) while also creating clear, timely, and engaging content (for the faster retrieval methods). If you can successfully align your efforts with both of these systems, you will maximize your chances of getting your content into the initial longlist—the gateway to the powerful reasoning of the 360Brew engine. Step 2: The 360Brew Ranking & Reasoning Engine (The New L2 Ranker) Once the wide net of Candidate Generation has pulled in a few thousand promising posts, the system moves to the second and most critical stage of the reimagined funnel. This is where the brute force of recall gives way to the surgical precision of relevance. The thousands of candidates are now handed over to the new centerpiece of LinkedIn's AI ecosystem: the 360Brew Ranking & Reasoning Engine. If the previous step was about scouting for potential, this step is the final, in-depth interview. 
Each candidate post is rigorously evaluated against your specific profile and recent behavior to answer one fundamental question: "Out of all these options, which handful of items are the most valuable to this specific member, right now?" This is the new "L2 Ranker," and its arrival marks the single greatest shift in how LinkedIn's feed works. The old L2 ranker, a model named LiGR, was a marvel of traditional machine learning—a highly specialized Transformer model trained to find complex patterns in numerical features. 360Brew is a different species entirely. It does not think in numbers and features; it thinks in language and concepts.
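The shift can be made concrete with a small sketch: the old-style ranker consumed an engineered numeric feature vector, while the new engine consumes plain text. The feature names, weights, and wording below are invented purely for illustration.

```python
# Contrast sketch: a traditional ranker scores a numeric feature vector;
# an LLM-based ranker reads text. All names and weights are hypothetical.

def old_style_score(features):
    """LiGR-era style: a learned function over engineered numbers."""
    weights = {
        "author_follower_count": 0.000001,
        "topic_affinity": 0.8,
        "recency_hours": -0.05,
    }
    return sum(weights[k] * v for k, v in features.items())

def new_style_input(profile, post):
    """360Brew-era style: plain language, evaluated by a reading model."""
    return f"Member: {profile}\nPost: {post}\nWill the member engage?"

s = old_style_score(
    {"author_follower_count": 12000, "topic_affinity": 0.9, "recency_hours": 3}
)
text = new_style_input("B2B SaaS Marketing Leader", "Five lessons from a PLG rollout")
```

The first function can only weigh what engineers counted in advance; the second hands the raw language to a model that can reason about substance.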
What it is: An Expert on the Professional World ●​ What happens:​ 360Brew is a massive foundation model with over 150 billion parameters, built upon the powerful open-source Mixtral 8x22B Mixture-of-Experts (MoE) architecture. As we discussed in Part 1, this means it's not just one giant AI brain but a "boardroom" of specialized expert networks, making it incredibly knowledgeable and efficient.​ But its power doesn't come from its base architecture alone. Its true expertise comes from its education. After starting with a general understanding of the world from reading the public internet, LinkedIn put it through an intensive, multi-stage fine-tuning process fed by trillions of tokens of proprietary data.​ This process imbued 360Brew with a deep, nuanced understanding of the professional world that no generic AI could ever possess. It learned the intricate relationships between job titles, skills, industries, companies, and seniority levels. It learned the subtle differences in language between a software engineer and a product manager, or between a sales leader and a marketing executive. It learned to identify credible expertise, to understand conversational dynamics, and to recognize the markers of a valuable professional discussion. It is, for all intents and purposes, the world's foremost expert on the LinkedIn Economic Graph, capable of reading and interpreting its data with near-human comprehension. ●​ So what?:​ The engine evaluating your content is no longer a pattern-matching machine looking for statistical correlations. It is an expert reader with deep domain knowledge. This has profound implications. A system that understands concepts can see beyond superficial keywords. It can understand that a post about "reducing customer acquisition cost" is highly relevant to a VP of Sales, even if the post never uses the word "sales." 
It can recognize that a detailed, well-structured argument from a known expert is more valuable than a shallow, clickbait post, even if the latter uses more trendy hashtags. The bar for quality has been raised. Your content is no longer being judged by a machine on its signals, but by an expert on its substance. ●​ Now what?:​ You must shift your mindset from creating "content that the algorithm will like" to creating "content that an expert in your field would find valuable." ○​ Write for an Intelligent Audience: Assume the reader (the AI) is the smartest person in your industry. Avoid jargon for jargon's sake, but don't shy away
from using precise, professional language. Explain complex topics with clarity and depth. The model is trained to recognize and reward genuine expertise, which is demonstrated through the quality and coherence of your writing. ○​ Demonstrate, Don't Just Declare: Don't just put "Marketing Expert" in your headline. Demonstrate that expertise in your content. Share unique insights, provide a contrarian (but well-reasoned) take on a popular topic, or create a detailed framework that helps others solve a problem. 360Brew is designed to evaluate the substance of your contribution, not just the labels you attach to it. ○​ Focus on Your Niche: An expert model respects niche expertise. A deep, insightful post for a specific audience (e.g., "A Guide to ASC 606 Revenue Recognition for SaaS CFOs") is now more likely to be identified and shown to that exact audience than ever before. The model's deep domain knowledge allows it to perform this highly specific matchmaking with incredible accuracy. Don't be afraid to go deep; the system can now follow you there. How it Works: The Power of the Prompt ●​ What happens:​ The most revolutionary aspect of 360Brew is how it receives information. It does not ingest a long list of numerical features. Instead, for each of the thousands of candidate posts, the system constructs a detailed prompt in natural language. This prompt is a dynamically generated briefing document, a bespoke dossier created for the sole purpose of evaluating a single post for a single member.​ Based on the research papers, this prompt is assembled from several key textual components: ○​ The Instruction: A clear directive given to the model, such as, "You are provided a member's profile, their recent activity, and a new post. Your task is to predict whether the member will like, comment on, or share this post." ○​ The Member Profile: The relevant parts of your profile, rendered as text. 
This includes your headline, your current and past roles, and likely key aspects of your About section. ○​ The Past Interaction Data: A curated list of your most recent and relevant interactions on the platform, also rendered as text. For example: "Member has commented on the following posts: [Post content by Author A]... Member has liked the following posts: [Post content by Author B]..."
○​ The Question: The candidate post itself, including its text, author, and topic information. ○​ The Answer: The model's task is to generate the answer, predicting your likely action. ●​ The engine reads this entire document, from top to bottom, for every single candidate post it evaluates for you. ●​ So what?:​ This means that every ranking decision is a fresh, context-rich evaluation. The system is not just matching your static profile against a static post. It is performing a holistic analysis that takes into account your identity, your immediate interests, and the content of the post in a single, unified comprehension task. This is why the quality of the text on your profile and in your content has become the most critical factor for success. Poorly written, unclear, or keyword-stuffed text creates a muddled, low-quality prompt, which will lead to a poor evaluation. Clear, compelling, and well-structured text creates a high-quality prompt that allows the engine to make a more accurate and favorable assessment. ●​ Now what?:​ You must treat every piece of text you create on LinkedIn as a direct input for these prompts. Your goal is to make the dossier about you as clear, compelling, and impressive as possible. ○​ Optimize Your Profile's Narrative: Go back and read your headline, About section, and experience descriptions out loud. Do they tell a coherent and compelling story of your professional value? An AI that reads for a living will be able to tell the difference between a thoughtfully crafted narrative and a jumbled list of buzzwords. ○​ Craft Your Posts for Readability: Structure your posts for clarity. Use short paragraphs, bullet points, and bolding to make your key points easy to parse. A well-structured post is not just easier for humans to read; it's easier for a language model to comprehend and evaluate. ○​ Be Deliberate with Your Language: The words you choose matter more than ever. The engine understands semantic nuance. 
Writing with precision and authority will be interpreted as a signal of expertise. This doesn't mean using overly complex vocabulary; it means using the right vocabulary for your domain clearly and effectively.
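As a rough illustration, the briefing document described above can be assembled with a simple template. The field labels and wording are assumptions based on the published descriptions of the system, not LinkedIn's actual prompt format.

```python
# Sketch of assembling a 360Brew-style ranking prompt from textual parts:
# instruction, member profile, past interactions, and the candidate post.
# The template wording is a guess, not LinkedIn's real template.

def build_prompt(profile, interactions, candidate_post):
    instruction = (
        "You are provided a member's profile, their recent activity, and a "
        "new post. Predict whether the member will like, comment on, or "
        "share this post."
    )
    history = "\n".join(f"- {i['action']}: {i['text']}" for i in interactions)
    return (
        f"Instruction: {instruction}\n\n"
        f"Member profile:\n{profile}\n\n"
        f"Past interactions:\n{history}\n\n"
        f"Candidate post:\n{candidate_post}\n\n"
        "Answer:"
    )

prompt = build_prompt(
    profile="Headline: B2B SaaS Marketing Leader",
    interactions=[
        {"action": "liked", "text": "A post about product-led growth"},
        {"action": "commented", "text": "A thread on pricing strategy"},
    ],
    candidate_post="Five lessons from scaling a PLG motion",
)
```

Note that every ranking call rebuilds this string from live data: only the prompt changes between evaluations, never the model's weights.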
The Core Mechanism: In-Context Learning & The Illusion of Live Learning ●​ What happens:​ This is where we must address the most profound and often misunderstood aspect of this new ecosystem. How does the system adapt to your behavior so quickly? The answer is In-Context Learning (ICL), and it works very differently from how you might think.​ The old system achieved "freshness" by constantly updating data points. Specialized systems would run every few minutes or hours to recalculate features like "likes on this post in the last hour." The core ranking model itself was retrained less frequently, perhaps daily. The model’s knowledge was relatively static, but the data fed to it was always fresh.​ The new system inverts this entirely. The core 360Brew model is now the most static part of the equation. Training a 150B+ parameter model is a monumental task, taking weeks or months. The model’s internal knowledge—its frozen weights and parameters—is not updated in real-time. It is not "learning" from your clicks in the sense that a student learns and permanently updates their knowledge.​ Instead, the system's dynamism comes from the prompt itself. The prompt is assembled from scratch, in real-time, for every single ranking calculation. The crucial "Past Interaction Data" section is a live query to a database of your most recent actions. This is how the system adapts.​ Think of 360Brew as a world-class consultant with a fixed, encyclopedic knowledge base. ○​ The Old Way: You would give the consultant a spreadsheet of data (the features) that you updated every hour. The consultant's advice would be fresh because the data was fresh. ○​ The New Way: You give the consultant the same, static encyclopedia of knowledge (the frozen model). But for every single question, you also hand them a one-page, up-to-the-second briefing document (the dynamic prompt) that says, "Here’s what’s happened in the last five minutes." 
●​ The consultant's fundamental knowledge doesn't change, but by conditioning their expertise on the immediate context of the briefing document, their answer is perfectly tailored to the present moment. This is In-Context Learning. The model "learns" temporarily, for the duration of a single thought process, from the examples you provide in the prompt. ●​ So what?:​ This has two massive consequences. First, the LinkedIn feed is now hyper-responsive. Your immediate interests, as demonstrated by your very last few actions, can have a significant impact on what you see next. The system is always
trying to model your current "session intent." If you spend five minutes engaging with content about product-led growth, the system will instantly prioritize more of that content for you because your actions have rewritten the "Past Interaction Data" for the next prompt.​ Second, your engagement is far more than just a simple signal of approval. Each like, comment, or share is an active contribution to the live briefing document that defines you in that moment. You are not just providing a historical data point for a model to be trained on next week; you are actively feeding the model examples of what you find valuable right now, and it is using those examples to reason about the very next piece of content it shows you. You are, in a very real sense, a co-creator of your own feed's logic. ●​ Now what?:​ Your engagement strategy must become as deliberate and strategic as your content strategy. You are constantly providing the live context that steers the AI. ○​ Engage with Aspiration: This is the most powerful tactic in the new ecosystem. Actively seek out and engage with content from the experts, companies, and communities you want to be associated with. When you leave a thoughtful comment on a post by a leader in your field, you are providing a powerful, in-context example to the AI: "This is the conversation I belong in. Consider me in this context." This action directly influences how the model perceives and ranks content for you and, by extension, how it ranks your content for others in that same context. ○​ Curate Your Context: Your feed is a reflection of the examples you provide. Don't be afraid to use the "I don't want to see this" option or to unfollow connections whose content is not relevant to you. Muting or hiding content is a powerful signal that helps clean up your "Past Interaction Data." This ensures the examples the system learns from are of high quality, leading to a more refined and relevant feed over time. 
A noisy, unfocused history will lead to noisy, unfocused recommendations. ○​ Warm Up the Engine: Before you post an important piece of content, take 10-15 minutes to "warm up" the system. Engage with several high-quality posts on the same topic as the one you are about to publish. This pre-loads your "Past Interaction Data" with highly relevant, recent examples. It attunes the in-context learning mechanism to your immediate area of focus, effectively telling the system, "Pay attention to this topic right now." This can provide a meaningful edge in the crucial first hour of your post's life, ensuring it's evaluated in the most favorable context possible for your network.
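The "frozen model, live prompt" mechanism can be illustrated with a stand-in scorer: its logic (playing the role of frozen weights) never changes between calls, yet its prediction changes as soon as the in-context history does. This is a didactic toy, not how 360Brew computes scores.

```python
# Toy illustration of in-context learning: the "model" (a fixed function
# standing in for frozen weights) is never retrained; only the context it
# is conditioned on changes between calls.

def frozen_model(context_history, candidate_topic):
    """Predict engagement as the fraction of recent history matching the
    candidate's topic. The function itself never changes."""
    if not context_history:
        return 0.0
    hits = sum(1 for topic in context_history if topic == candidate_topic)
    return hits / len(context_history)

# Same model, same candidate topic, different live context:
before_warmup = frozen_model(["hiring", "sales"], "ai")   # 0.0
after_warmup = frozen_model(["ai", "ai", "sales"], "ai")  # ~0.67
```

The "warm up" tactic above works exactly this way: recent on-topic engagement rewrites the context the next prediction is conditioned on, with no retraining anywhere.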
By understanding the 360Brew engine—what it is, how it works through dynamic prompting, and its core mechanism of in-context learning—you can finally move beyond the world of algorithmic hacks. You can stop asking "How do I please the machine?" and start asking "How do I best communicate my value?" In this new ecosystem, the answer to both questions is finally, and powerfully, the same. Step 3: The Art of the Prompt - The New "Secret Sauce" We've established that the modern LinkedIn AI runs on prompts. For every ranking decision, the system assembles a unique, detailed briefing document for its 360Brew reasoning engine. This shift from features to prompts is the core of the new paradigm. But this raises a crucial, and surprisingly complex, question: what makes a good prompt? This isn't a trivial matter. When your briefing document can be thousands of words long—containing your entire profile and a long list of recent interactions—its structure becomes just as important as its content. Anyone who has used a tool like ChatGPT knows this intuitively. Asking a question in a clear, well-structured way yields a much better answer than a rambling, disorganized query. For LinkedIn, a system that constructs billions of these prompts every day, this is a multi-million dollar engineering challenge. Their researchers have published detailed studies on a fascinating limitation of all large language models, a problem we can think of as the AI's "attention span." Understanding this limitation is the key to understanding the new "secret sauce" of the platform. It will change how you think about your content, your profile, and even the order of your sentences. This stage of the funnel is all about History Construction—the art of building the most effective prompt to overcome the engine's inherent blind spots. 
And while you cannot directly write the prompts that are sent to 360Brew—they are assembled programmatically in fractions of a second—you have absolute control over the quality of the raw materials the system uses to build them. The "Lost-in-Distance" Challenge: The AI's Attention Span ●​ What happens:​ Large language models, for all their power, have a cognitive limitation that mirrors
our own. It's a phenomenon that LinkedIn's researchers, in a paper titled "Lost-in-Distance," have documented in detail.​ Imagine you are given a 50-page report and asked to answer a complex question that requires you to connect two key facts. If those two facts are in the same paragraph on page 2, the task is easy. If one fact is in the executive summary on page 1 and the other is in the final recommendations on page 50, it's still relatively straightforward. But what if one crucial detail is buried in a footnote on page 17, and the other related detail is hidden in a data table on page 42? You'd likely struggle to make the connection. The two pieces of information are simply too far apart—they are "lost in distance" from each other.​ LLMs suffer from the exact same problem. When the system constructs a long prompt containing your profile and a detailed history of your interactions, the model's ability to reason effectively depends on the proximity of relevant information. It is very good at using information at the very beginning of the prompt (your profile) and at the very end (the candidate post it's evaluating). However, its ability to cross-reference and connect two related pieces of information degrades significantly as the distance between them in the prompt increases.​ Let's use a practical example. Suppose the system is trying to decide whether to show you a post about "Sustainable Finance" from a VP at Goldman Sachs. The prompt might contain the following pieces of information about you, scattered among dozens of other interactions: ○​ Near the beginning: Your profile headline says "ESG Investing Professional." ○​ Buried in the middle: You liked a post about "Impact Investing" three weeks ago. ○​ Also in the middle: Your work history shows you once worked at Goldman Sachs. 
●​ For an effective prediction, the AI needs to connect all three of these points: you are an ESG professional, you are interested in a related topic, and you have an affinity for the author's company. If these facts are separated by thousands of tokens of other, less relevant activity, the model may fail to connect the dots. Its performance will degrade. The signal will be lost in the noise. ●​ So what?:​ This means that the structure of the prompt is a critical, hidden ranking factor. The way LinkedIn's systems choose to order your interaction history can dramatically impact the outcome of the final ranking. This is no longer a simple chronological feed. It's a carefully curated narrative, assembled in the moment to be as persuasive and easy-to-understand for the 360Brew engine as possible. The system isn't just a data fetcher; it's a programmatic prompt engineer. It actively works to put the most important information "front and center" where the AI is most likely to see and
use it effectively. For you as a creator, this is a profound realization: the system isn't just evaluating your content, it's building a case for it, and the quality of your textual inputs determines how strong that case can be. ●​ Now what?:​ Your job is to make it incredibly easy for the system's prompt engineer to find your most important information and build a compelling case for you. You need to create text that is "prompt-friendly." ○​ Front-load Your Value: This is the most direct application of the "Lost-in-Distance" principle. Place your most important keywords, value propositions, and job titles at the beginning of every text field. This is the information that is most likely to be placed at the top of the prompt's context, where the AI's attention is strongest. ■​ In your Headline: "Data-Driven Marketing Leader | B2B SaaS | AI & Analytics" is better than "Helping Companies Grow with Marketing." The first is dense with key entities placed at the start. ■​ In your About Section: Start with a powerful, one-sentence summary that encapsulates who you are and what you do. "I am a product marketing executive with 15 years of experience leading go-to-market strategies for high-growth SaaS companies" is a perfect opening line. Don't bury your core expertise in the third paragraph behind a lengthy personal story. ■​ In your Posts: Your first sentence is the most valuable real estate you have. It must hook the reader and clearly signal the topic and value of the post. This ensures that even in a truncated view or as an item in a list, the core message is at the "top" of that piece of context. ○​ Create "Dense" Signals of Expertise: Instead of scattering your skills and interests across many low-effort posts, concentrate them. A single, well-written, in-depth article on a niche topic is a much "denser" and more powerful signal than twenty vague, unrelated updates. 
This creates a strong, self-contained piece of context that the prompt engineer can easily pull and feature as a prime example of your expertise. ○​ Maintain a Coherent Narrative: A profile where the headline, summary, and experience all tell the same professional story is easier for the model to understand. This consistency reduces the cognitive load on the AI, as it doesn't have to reconcile contradictory or widely dispersed signals to figure out who you are. This coherence makes your professional "dossier" much easier to read and interpret.
History Construction: Building the Perfect Briefing, Automatically ●​ What happens:​ Knowing that the 360Brew engine suffers from the "Lost-in-Distance" problem, LinkedIn's systems cannot simply dump a member's entire chronological history into the prompt. Doing so would be inefficient and ineffective. Instead, the system must act as an intelligent, automated editor, programmatically curating a history that is most likely to lead to an accurate prediction.​ This is the core of the system's own "secret sauce." You, the user, cannot directly influence this process. You can't tell the system, "Hey, for this next post, please use a similarity-based history." This all happens behind the scenes, governed by sophisticated engineering and machine learning. Based on the research papers and an understanding of the problem, the system likely employs several automated history construction strategies, choosing the best one for the specific task at hand. ○​ Chronological History: The most straightforward approach is to order your interactions from oldest to newest. This is useful for tasks where understanding your evolving journey or the sequential nature of a conversation is important. However, as we know, this can fall victim to the "Lost-in-Distance" problem if your most relevant information is chronologically old. ○​ Recency-Weighted History: A simple but effective modification is to heavily prioritize your most recent interactions. The system's logic automatically gives far more weight to your activity from the last hour than your activity from last week, placing it more prominently in the prompt. This is the mechanism behind the system's hyper-responsiveness. ○​ Similarity-Based History: This is a much more powerful and computationally intensive strategy. When evaluating a candidate post for you, the system can first perform a quick search of your entire interaction history to find past posts that are most semantically similar to the candidate. 
It might find three posts you liked on the same topic, or two articles you shared from the same author. The system's logic then takes these highly relevant historical examples and places them at the top of the "Past Interaction Data" section of the prompt. It's like a lawyer's assistant automatically finding and highlighting the three most relevant case precedents for a new legal brief. It directly overcomes the "Lost-in-Distance" problem by programmatically moving the most relevant information close together.
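To make the similarity-based strategy concrete, here is a minimal Python sketch. The toy bag-of-words embedding and the data shapes are our own illustrative assumptions; a production system would use a learned text-embedding model and an approximate nearest-neighbor index over your history.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words vector, standing in for a learned embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(count * b.get(token, 0) for token, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def similarity_history(candidate_post, history, k=3):
    """Return the k past interactions most semantically similar to the
    candidate, to be placed first in the prompt's history section."""
    cand = embed(candidate_post)
    return sorted(history,
                  key=lambda h: cosine(embed(h["content"]), cand),
                  reverse=True)[:k]

history = [
    {"action": "liked", "content": "New benchmarks for generative AI in marketing analytics"},
    {"action": "dismissed", "content": "Hustle harder every single day"},
    {"action": "commented", "content": "Generative AI is reshaping marketing analytics workflows"},
]
top = similarity_history("How generative AI changes marketing analytics", history, k=2)
```

Note how the dismissed motivational post never makes the cut: only the semantically closest interactions are promoted to the front of the history.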
○​ Priority-Based History: The system understands that not all interactions are created equal. A thoughtful comment is a far stronger signal of interest than a passive view. A "share with comment" is more significant than a simple "like." The system can be programmed with a priority dictionary, ensuring that these high-intent actions are automatically given preferential placement within the prompt, regardless of when they occurred. ●​ In a live production environment, the system is likely using a hybrid of these strategies, with its own models dynamically choosing the best combination based on you, the content, and the specific prediction task. This curation is fully automated. ●​ So what?:​ The personalization you experience on LinkedIn is not a passive reflection of your history; it is an active, curated, and machine-generated interpretation of your history. The system is constantly making automated editorial decisions about what aspects of your professional identity are most relevant in this moment. This means that the quality and clarity of your past actions have a compounding effect. A history filled with clear, high-intent engagements on a coherent set of topics gives the programmatic prompt engineer a wealth of powerful evidence to work with. A history of vague, low-effort, or scattered engagement provides weak evidence, resulting in a less persuasive prompt and, consequently, less relevant recommendations. You control the quality of the ingredients; the system controls the recipe. ●​ Now what?:​ You can't control the automated recipe, but you have 100% control over the quality of the ingredients you provide. Your goal is to fill your historical record with the highest-grade raw materials for the prompt engineer to use. ○​ Prioritize High-Intent Engagement: A single thoughtful comment is worth more than a dozen mindless likes. When you engage, aim for depth. Add value to the conversation. Ask insightful questions. Share a post with your own unique take. 
These actions are the "priority" ingredients that the history construction algorithm is designed to find and elevate. ○​ Build a Thematically Consistent History: While it's fine to have diverse interests, your core professional engagement should be thematically consistent. If you are an expert in cybersecurity, a significant portion of your high-intent engagement should be on cybersecurity topics. This creates a dense cluster of similar, high-quality interactions, making it easy for the similarity-based history constructor to find powerful examples to put in your prompt's In-Context Learning section.
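The "priority dictionary" idea behind this can be sketched in a few lines of Python. The action names and weights below are illustrative assumptions, not LinkedIn's actual values; the point is simply that high-intent actions win placement regardless of age.

```python
# Hypothetical weights: a thoughtful comment outranks a passive view.
ACTION_PRIORITY = {
    "share_with_comment": 5,
    "comment": 4,
    "share": 3,
    "like": 2,
    "view": 1,
    "dismiss": 0,  # still informative, but lowest placement priority
}

def priority_history(history, k=3):
    """Return the k interactions with the strongest intent signals,
    breaking ties in favor of more recent activity."""
    return sorted(
        history,
        key=lambda h: (ACTION_PRIORITY.get(h["action"], 0), h["timestamp"]),
        reverse=True,
    )[:k]

history = [
    {"action": "view", "timestamp": 500, "content": "..."},
    {"action": "comment", "timestamp": 100, "content": "..."},
    {"action": "like", "timestamp": 400, "content": "..."},
]
top = priority_history(history, k=2)
# The older comment outranks the newer view and like.
```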
○​ Don't Just Consume, Create and Converse: The system is trying to understand you as a professional, and professionals are active participants in their field. A history that only shows you passively "liking" content is less informative than a history that shows you creating content, starting conversations, and adding your voice to existing ones. The latter provides much richer context for the AI to reason with, giving the prompt engineer better material to build its case. Bringing It All Together: A Look Inside the Prompt So far, we've discussed the theory behind the 360Brew engine and the art of automated prompt construction. Now, let's make it concrete. What does one of these briefing documents, assembled in a fraction of a second for a single ranking decision, actually look like? Based on the structure and syntax revealed in LinkedIn's own research papers, we can infer the format. The examples below are fictional, but they are constructed to be faithful to the principles and components we've discussed. They are your first look "under the hood" at the new operating logic of the LinkedIn feed. As you read them, notice how the different textual elements from a member's profile and activity are woven together into a single, coherent document for the AI to analyze. Example 1: Prompt to Predict a "Like" on a Marketing Strategy Post Instruction:​ You are provided a member's profile and a set of posts, their content, and interactions that the member had with the posts. For each past post, the member has taken one of the following actions: liked, commented on, shared, viewed, or dismissed.​ Your task is to analyze the post interaction data along with the member's profile to predict whether the member will like, comment, share, or dismiss a new post referred to as the "Question" post. Note:​ Focus on topics, industry, and the author's seniority more than other criteria. 
In your calculation, assign a 30% weight to the relevance between the member's profile and the post content, and a 70% weight to the member's historical activity.
Member Profile:​ Current position: Senior Content Marketing Manager, current company: HubSpot, Location: Boston, Massachusetts. Past post interaction data:​ Member has commented on the following posts: [Author: Ann Handley, Content: 'Great content isn't about storytelling; it's about telling a true story well. In B2B, that means focusing on customer success...', Topics: content marketing, B2B marketing]​ Member has liked the following posts: [Author: Christopher Penn, Content: 'Ran the numbers on the latest generative AI model's impact on SEO. The results are surprising... see the full analysis here...', Topics: generative AI, SEO, marketing analytics]​ Member has dismissed the following posts: [Author: Gary Vaynerchuk, Content: 'HUSTLE! There are no shortcuts. Stop complaining and start doing...', Topics: entrepreneurship, motivation] Question:​ Will the member like, comment, share, or dismiss the following post: [Author: Rand Fishkin, Content: 'Everyone is focused on AI-generated content, but the real opportunity is in AI-powered distribution. Here's a framework for thinking about it...', Topics: marketing strategy, AI, content distribution] Answer:​ The member will like Analysis of the Prompt: In this first example, you can see the core components in action. The Member Profile establishes a clear identity ("Senior Content Marketing Manager"). The Past post interaction data provides powerful in-context examples: the member engages with industry leaders (Ann Handley, Christopher Penn) on core topics (content marketing, AI, SEO) but dismisses generic motivational content. The Question presents a post that is a perfect topical and conceptual match. The engine reads this entire narrative and, using its reasoning capabilities, correctly predicts a high-intent action like a "like." 
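A briefing like Example 1 is not typed by hand; the serving system assembles it from structured records in a fraction of a second. The sketch below is our own hypothetical reconstruction of that assembly step, reusing the section labels from the example; the helper names and data shapes are assumptions.

```python
from collections import defaultdict

INSTRUCTION = (
    "You are provided a member's profile and a set of posts, their content, "
    "and interactions that the member had with the posts. Your task is to "
    "predict whether the member will like, comment, share, or dismiss the "
    '"Question" post.'
)

def render_post(post):
    topics = ", ".join(post["topics"])
    return f"[Author: {post['author']}, Content: '{post['content']}', Topics: {topics}]"

def build_prompt(profile, interactions, candidate):
    """Assemble a 360Brew-style briefing from structured records."""
    by_action = defaultdict(list)
    for action, post in interactions:
        by_action[action].append(render_post(post))
    parts = ["Instruction:\n" + INSTRUCTION,
             "Member Profile:\n" + profile,
             "Past post interaction data:"]
    for action, rendered in by_action.items():
        parts.append(f"Member has {action} the following posts: " + " ".join(rendered))
    parts.append("Question:\nWill the member like, comment, share, or dismiss "
                 "the following post: " + render_post(candidate))
    parts.append("Answer:")
    return "\n\n".join(parts)

prompt = build_prompt(
    profile="Current position: Senior Content Marketing Manager, Location: Boston",
    interactions=[("liked", {"author": "C. Penn", "content": "AI and SEO analysis",
                             "topics": ["AI", "SEO"]})],
    candidate={"author": "R. Fishkin", "content": "AI-powered distribution framework",
               "topics": ["marketing strategy", "AI"]},
)
```

Every string fed into this template — your headline, your post text, your comment history — is raw material you control; the assembly logic is not.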
Example 2: Prompt to Predict a "Comment" on a Product Marketing Post Instruction:​ You are provided a member's profile and a set of posts, their content, and interactions that the member had with the posts. For each past post, the member has taken one of the
following actions: liked, commented on, shared, viewed, or dismissed.​ Your task is to analyze the post interaction data along with the member's profile to predict whether the member will like, comment, share, or dismiss a new post referred to as the "Question" post. Note:​ Focus on topics, industry, and the author's seniority more than other criteria. In your calculation, assign a 30% weight to the relevance between the member's profile and the post content, and a 70% weight to the member's historical activity. Member Profile:​ Current position: Director of Product Marketing, current company: Salesforce, Location: San Francisco, California. Past post interaction data:​ Member has commented on the following posts: [Author: Avinash Kaushik, Content: 'Most analytics dashboards are data pukes. I'm challenging you to present just ONE metric that matters this week. What would it be?', Topics: data analytics, marketing metrics]​ Member has shared with comment the following posts: [Author: Joanna Wiebe, Content: 'Just released a new case study on how a simple copy tweak increased conversion by 45%. The key was changing the call to value, not call to action...', Topics: copywriting, conversion optimization]​ Member has liked the following posts: [Author: Melissa Perri, Content: 'Product strategy is not a plan to build features. It's a system of achievable goals and visions that work together to align the team around what's important.', Topics: product management, strategy] Question:​ Will the member like, comment, share, or dismiss the following post: [Author: April Dunford, Content: 'Hot take: Most companies get their positioning completely wrong because they listen to their customers instead of observing their customers. What's the biggest positioning mistake you've seen?', Topics: product marketing, positioning, strategy] Answer:​ The member will comment Analysis of the Prompt: This second example illustrates a more nuanced prediction. 
The historical data shows a pattern of not just liking, but actively commenting and sharing with comment, particularly on posts that ask questions or present strong opinions. The Question itself, from a known
expert (April Dunford), is designed to elicit a response by asking a direct question. The 360Brew engine, by reading this context, can infer that this member's pattern of behavior goes beyond simple approval. It recognizes the prompt for what it is—an invitation to a professional conversation—and correctly predicts the higher-intent action: a "comment." These examples reveal the new reality of LinkedIn. Your success is no longer a game of numbers and signals, but a matter of narrative and context. The strength of the case presented in these prompts is directly determined by the quality of the text you provide in your profile, your content, and your engagement. The following checklists are designed to help you make that case as compelling as possible. Step 4: Finalization, Diversity & Delivery After the 360Brew engine has performed its intensive, prompt-based analysis and returned a relevance score for each of the thousands of candidate posts, the core "thinking" is done. The system now has a ranked list, ordered from what the AI predicts will be most valuable to you down to the least. However, the process is not yet complete. If the system simply took the top-scoring posts and delivered them directly to your screen, the result might be highly relevant, but it could also be monotonous, repetitive, or unbalanced. You might see five posts in a row from the same hyperactive person in your network, or a feed entirely dominated by a single trending topic. A purely relevance-driven feed is not necessarily a healthy or engaging one. This final step is about applying a layer of editorial judgment and platform-wide rules to this ranked list.
It’s the stage where the raw, mathematical output of the AI is refined to create a balanced, diverse, and safe user experience. This involves applying final business rules, ensuring feed diversity, and preparing the content for final delivery to your specific device. Many of the principles from the old system's "Re-Ranking & Finalization" stage are still very much alive here, serving as essential guardrails for the powerful new engine.
Applying Final Business Rules: The Platform Guardrails ●​ What happens:​ Before the feed is shown, the top-ranked list of posts from 360Brew goes through a final, rapid series of automated checks. These are not about re-evaluating relevance but about enforcing platform-wide business rules and policies. This is a critical layer of governance that ensures the feed adheres to both community standards and a good user experience.​ This stage includes several key filters: ○​ Trust & Safety Moderation: This is the most important guardrail. Every piece of content is checked against LinkedIn's professional community policies. Automated systems, and in some cases human reviewers, work to identify and remove content that violates these policies, such as misinformation, hate speech, or spam. Even if a post scores highly for relevance with 360Brew, it will be removed at this stage if it is flagged by the Trust & Safety systems. ○​ Impression Discounting: The system keeps a memory of what you've recently seen. If you've already seen a particular post (i.e., it was rendered on your screen during a previous session), its score will be heavily discounted or it will be removed entirely from the list for your next feed refresh. This is to prevent you from seeing the same content over and over again. ○​ Frequency Capping (Anti-Gaming Rules): This is a crucial rule to prevent a single person or topic from dominating your feed. The system applies rules like, "Do not show a member more than X posts from the same author in a single feed session," or "Ensure there is a minimum gap between posts on the same viral topic." This prevents your feed from being flooded by a single, prolific creator or a single news event, even if their individual posts are all scoring highly. ○​ Block Lists & Mutes: This filter respects your personal preferences. 
If you have blocked a member, muted them, or unfollowed them, their content is explicitly removed from your feed at this stage, regardless of its relevance score. ●​ So what?:​ This means that pure, raw relevance is not the only factor that determines what you see. LinkedIn actively intervenes to shape the final feed for health, safety, and a good user experience. The platform is making an editorial judgment that a balanced and safe feed is more valuable in the long run than a feed that is simply a firehose of the highest-scoring content. This also means there are hard limits to visibility. No
matter how great your content is, you cannot brute-force your way into a user's feed ten times in a row. The system is explicitly designed to prevent that. ●​ Now what?:​ While you cannot directly influence these rules, you can align your strategy with their intent, which is to foster a healthy and diverse professional community. ○​ Post Consistently, Not Repetitively: Maintain a good posting cadence to stay top-of-mind, but avoid posting so frequently that you trigger frequency caps for your most engaged followers. Blasting out five posts in a single hour is more likely to get your later posts suppressed than to increase your overall reach. Space out your valuable content. ○​ Vary Your Content: If you post often, try to vary your topics and formats. This not only keeps your content fresh for your audience but also makes it less likely to be flagged by anti-gaming rules designed to prevent repetitive content. A mix of text posts, articles, videos, and shares is healthier than a monolithic stream of the same type of update. ○​ Play the Long Game: Understand that the system is designed to provide a good experience over weeks and months, not just in a single session. Building a loyal following who finds your content consistently valuable is a more durable strategy than trying to create a single viral hit that might get throttled by the system's guardrails anyway. ○​ Always Adhere to Professional Community Policies: This should go without saying. The fastest way to have zero visibility is to create content that violates LinkedIn's rules. Professionalism, respect, and authenticity are the price of admission. Ensuring Feed Diversity: From Manual Rules to Automated Curation ●​ What happens:​ Beyond the hard-coded business rules, the system also works to ensure the feed is topically and structurally diverse. The old system accomplished this primarily through rigid, rule-based re-rankers. 
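Whether they live in hard-coded guardrails or in a rigid re-ranker, rules like these reduce to a single filtering pass over the relevance-ranked list. Here is a minimal Python sketch of that pass; the thresholds are our own illustrative assumptions, not LinkedIn's actual values.

```python
def apply_guardrails(ranked_posts, seen_ids, blocked_authors, max_per_author=3):
    """Filter a relevance-ranked list: drop blocked/muted authors, discount
    already-seen posts, and cap how often any single author appears."""
    kept, per_author = [], {}
    for post in ranked_posts:  # already sorted best-first by relevance score
        author = post["author"]
        if author in blocked_authors:
            continue  # block lists & mutes: hard removal
        if post["id"] in seen_ids:
            continue  # impression discounting: skip what you've seen
        if per_author.get(author, 0) >= max_per_author:
            continue  # frequency capping: anti-gaming rule
        per_author[author] = per_author.get(author, 0) + 1
        kept.append(post)
    return kept

# Eight ranked posts: the first five from one prolific author.
ranked = [{"id": i, "author": "prolific" if i < 5 else "other"} for i in range(8)]
feed = apply_guardrails(ranked, seen_ids={1}, blocked_authors=set(), max_per_author=3)
```

In this toy run, the already-seen post and one extra post from the prolific author are silently dropped, no matter how well they scored.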
For example, a rule might have stated, "Ensure a minimum gap of two items between any 'out-of-network' posts."​ The new ecosystem, powered by models like 360Brew and its predecessors like LiGR, can handle this in a much more intelligent and automated way. The latest research papers describe a move towards setwise ranking.​ Instead of evaluating each post in isolation (pointwise ranking), a setwise model
looks at the top-ranked posts as a group, or a "set." It can see the top 10 or 20 posts that are likely to be shown and can ask questions like: ○​ "Are too many of these posts from the same author?" ○​ "Are all of these posts about the same trending topic?" ○​ "Does this set contain a good mix of content formats (text, video, articles)?" ●​ The model can then adjust the scores, perhaps down-ranking a post that is too similar to another, higher-ranked post, or boosting a post that adds unique value or a different perspective to the set. This allows the system to learn what a "good" slate of content looks like for each member, rather than relying on one-size-fits-all rules. For example, it might learn that you prefer to see a few posts about your core industry, followed by one about a secondary interest, and another that is a poll or a question to engage with. It can then curate the feed to match this learned preference for diversity. ●​ So what?:​ This means that the success of your post can be affected by the other content that is ranking highly for a member at that moment. Even if your post scores highly on its own, its chances can be boosted or reduced based on the context of the entire feed session. Uniqueness and complementary value matter. If ten other experts in your field have just posted about the same breaking news, your own post on that topic, even if it's excellent, might be down-ranked for a particular user in favor of a post on a different, valuable topic. Conversely, if your post offers a unique angle or covers an underserved topic, it might be boosted to add diversity to the feed. ●​ Now what?:​ You can't control what other content is ranking, but you can control the uniqueness and value proposition of your own content. ○​ Offer a Unique Angle: When commenting on a trending topic, don't just regurgitate the same talking points. Try to provide a unique perspective, a piece of data no one else has, or a contrarian viewpoint. 
This makes your content a "diversity candidate," increasing its chances of being selected to balance a feed that might otherwise be monotonous. ○​ Develop Your Niche: As discussed before, focusing on a specific niche is a powerful strategy. It not only helps you build a dedicated audience but also makes your content a valuable source of diversity for the system. Your deep expertise on a specific topic is a unique asset that the setwise ranker can use to create a more interesting and valuable feed for members interested in that niche. ○​ Consider the "Feed Mix": While you can't predict the feed, be aware of the general conversations happening in your industry. If everyone is talking about Topic A, that might be the perfect time to publish your thoughtful
piece on Topic B. Your content might stand out not just to users, but to the setwise ranker itself. Delivery: Formatting for the Final Destination ●​ What happens:​ In the final milliseconds of the process, the curated, ranked, and finalized list of posts is handed over to the delivery systems. This stage is responsible for formatting the content for your specific device—whether it's a web browser on a large monitor, an iOS app, or an Android device. Specialized "Render Models" take the raw content and prepare it for display, ensuring that text wraps correctly, images are sized appropriately, and videos are ready to play. The formatted feed is then sent to your device and rendered on your screen. The content has made it! ●​ So what?:​ The system is optimizing not just for relevance, but for a good consumption experience on every platform. This is a subtle but important final step. A post that is difficult to read on a mobile device, for example, will likely have lower engagement, which feeds back into the system as a negative signal over time. ●​ Now what?:​ This is the simplest step to align with, but one that is often overlooked. Always design your content to be easily consumable on mobile devices, as this is where a majority of users interact with the feed. ○​ Use Short Paragraphs: Break up large walls of text. One or two sentences per paragraph is a good rule of thumb. ○​ Check Your Visuals: If you are creating an image or a carousel with text, make sure the font is large and legible enough to be read easily on a small phone screen. ○​ Write Concise Video Hooks: The first few seconds of a video are critical. On mobile, users scroll quickly. Your opening must grab their attention immediately, with or without sound (so use captions!). From the raw power of 360Brew's predictions to the final, refined list that appears on your phone, this finalization stage is a crucial part of the process.
It's where the platform's broader goals—community health, user experience, and safety—are layered on top of pure relevance. By understanding and aligning with these goals, you move from simply creating content to being a valuable and trusted contributor to the entire professional ecosystem.
LinkedIn Profile Checklist for Marketers & Creators The previous sections have detailed the profound shift in how LinkedIn's AI works. We've moved from a world of numerical signals to a world of natural language, from a feature factory to a reasoning engine. Now, we translate that understanding into action. These checklists have been completely revised to align with this new paradigm. They are your practical, step-by-step guides to providing the highest-quality raw materials for the AI's prompt engineering system. The new guiding principle is simple: Communicate your value with clarity, because a powerful AI is now your primary audience. New Guiding Principle: Your profile is no longer just a source for abstract features; it is the raw, foundational text that forms the "Member Profile" section of the AI's prompt. Every ranking decision for or against your content begins with the AI reading this document. Because of the "Lost-in-Distance" challenge, the information at the top of your profile—your photo, background, and especially your headline—is the most influential. A clear, compelling, and keyword-rich narrative in this section directly and powerfully impacts the AI's understanding of who you are, what you know, and who needs to see your content. 1. Profile Photo & Background Photo ●​ Why it Matters in the 360Brew Era: While the language model itself doesn't "see" your photo in the traditional sense, these visual elements are crucial trust and engagement signals for the humans who ultimately interact with your content. The 360Brew engine is designed to predict human behavior. A profile with a professional, high-quality photo is more likely to be trusted and engaged with by other members. This positive human engagement then becomes powerful "Past Interaction Data" that feeds the In-Context Learning mechanism. A strong photo leads to better human signals, which in turn leads to a stronger case in future prompts. 
●​ What to do: Use a clear, professional headshot and a relevant, high-quality background photo. ●​ How to do it: ○​ Profile Photo:
■​ Use a high-resolution, well-lit photo where your face is clearly visible. ■​ Dress professionally, consistent with your industry and role. ■​ Ensure the background is simple and not distracting. ■​ Use a real photo. Systems are increasingly adept at detecting AI-generated or fake images, which can be a negative trust signal. ○​ Background Photo: ■​ Use a high-quality image (1584 x 396 pixels is ideal). ■​ Reflect your personal brand, company, industry, or a key professional achievement. ■​ If you use text, ensure it's legible on both desktop and mobile devices without being cut off. 2. Headline ●​ Why it Matters in the 360Brew Era: Your headline is the single most important line of text on your profile. It is the title of your professional dossier. Due to the "Lost-in-Distance" effect, information at the top of the context has the most weight. Your headline is almost certainly the first piece of your profile that is rendered into the Member Profile section of the prompt. It sets the entire context for how the AI interprets everything else about you. A powerful headline primes the model to see you as an expert. ●​ What to do: Craft a concise, keyword-rich headline (up to 220 characters) that clearly states who you are, what you do, and the value you bring. ●​ How to do it: ○​ Front-load Your Keywords: Place your 2-3 most important keywords or titles at the very beginning. "B2B SaaS Content Strategist | AI in Marketing" is immediately understood. ○​ State Your Value Proposition: Briefly explain the problem you solve or the value you create. Example: "Helping enterprise tech companies build their content engine." This gives the language model rich, conceptual context. ○​ Use the Language of Your Audience: Think about the terms your ideal connections or clients would search for. Use that language in your headline. This helps the Cross-Domain GNN in the Candidate Generation stage connect you to the right "graph neighborhood." 
○​ Keep it Updated: If your professional focus or key skills shift, update your headline immediately. It's the most influential part of your real-time professional identity. 3. About (Summary) Section ●​ Why it Matters in the 360Brew Era: If your headline is the title, your About section is the executive summary of your professional dossier. This is the largest block of narrative text the model has to learn from. It reads this section to understand the story behind your skills, the context of your achievements, and your professional "why." A well-written summary provides a rich, conceptual understanding that goes far beyond simple keywords, allowing the model to make more nuanced and accurate connections. ●​ What to do: Write a compelling, detailed summary that tells your professional story, weaving in your key skills, achievements, and goals naturally. ●​ How to do it: ○​ Start with a Strong Opening Paragraph: Just like your headline, front-load the value. Your first paragraph should summarize your core expertise and value proposition. ○​ Tell a Story with Keywords: Don't just list skills. Weave them into the narrative of your accomplishments. Instead of "Skills: SEO," write "I led the SEO strategy that resulted in a 300% increase in organic traffic for our flagship product." The model understands and values context and results. ○​ Quantify Your Achievements: Numbers are a universal language, even for an LLM. Quantifying your accomplishments ("managed a team of 10," "grew revenue by $5M") provides concrete, verifiable data points that signal impact and credibility. ○​ Mention Key "Entities": Naming notable companies you've worked with, technologies you've used, or significant projects you've led helps the system link your profile to other important nodes in the Economic Graph. 4. Experience Section ●​ Why it Matters in the 360Brew Era: The Experience section is the evidence that backs up the claims in your headline and summary. 
Each job description is parsed
as text, providing the model with a chronological narrative of your career progression and the specific context in which you applied your skills. This detailed history allows the model to reason about the depth of your expertise. ●​ What to do: Detail each role with achievement-oriented descriptions, using industry-standard language and keywords. ●​ How to do it: ○​ Link to Official Company Pages: Always link your role to the correct, official LinkedIn Company Page. This creates a clean, unambiguous link in the Economic Graph. ○​ Use Precise Titles and Dates: Use your exact job title and accurate employment dates. This helps the model build a clear timeline of your career trajectory. ○​ Focus on Achievements, Not Just Responsibilities: Use bullet points to describe your accomplishments in each role. Instead of "Responsible for social media," write "Grew our social media following by 50,000 and increased engagement by 25% in one year." Use the STAR method (Situation, Task, Action, Result) to frame your accomplishments. This provides rich, structured information that the AI can easily parse. ○​ Embed Relevant Skills in Each Role: Naturally weave the specific skills and keywords relevant to each job into its description. This shows the model when and where you applied your expertise. 5. Skills Section (Endorsements & Skill Badges) ●​ Why it Matters in the 360Brew Era: The Skills section provides structured, verifiable data points that complement the narrative of your profile. While 360Brew is language-first, it still benefits from these explicit signals. Endorsements from other skilled professionals and Skill Badges from LinkedIn assessments serve as powerful, third-party validation of your claims. This is corroborating evidence for the AI. ●​ What to do: Curate a comprehensive list of your most relevant skills, seek endorsements for them, and complete LinkedIn Skill Assessments where possible. 
● How to do it:
○ Pin Your Top 3 Skills: Place your most critical, relevant skills at the top so they are immediately visible.
○ Use Standardized Skill Terms: As you type, LinkedIn will suggest standardized skills. Use them. This maps your profile cleanly to the canonical "Skills" nodes in the Economic Graph.
○ Seek Strategic Endorsements: Ask connections who have direct knowledge of your work to endorse your key skills. An endorsement from another expert in that same skill carries more weight.
○ Earn Skill Badges: Passing a LinkedIn Skill Assessment adds a "verified" credential to your profile. This is a very strong, credible signal to both humans and the AI.

6. Recommendations
● Why it Matters in the 360Brew Era: Recommendations are the qualitative, third-party testimonials in your professional dossier. The 360Brew engine reads the full text of recommendations you've given and received. A well-written recommendation from a respected person in your network provides powerful social proof and rich semantic context about your skills and work ethic. The identity of the recommender also strengthens your connection to them in the Economic Graph.
● What to do: Request and give thoughtful, specific recommendations that highlight key skills and impactful achievements.
● How to do it:
○ Guide Your Recommenders: When asking for a recommendation, don't just send a generic request. Politely suggest the specific project or skills you'd like them to highlight.
○ Give Detailed, Valuable Recommendations: When recommending others, be specific. Mention the context of your work together, the skills they demonstrated, and the impact of their contribution. This not only helps them but also reflects positively on you as a thoughtful professional.

7. Education, Honors & Awards, Certifications, etc.
● Why it Matters in the 360Brew Era: These sections provide additional structured entities and keywords that enrich your profile's context. They are the credentials and accolades that round out your professional story.
A certification from a recognized body (like Google, HubSpot, or PMI) or an award from a respected industry organization adds verifiable credibility. The AI can recognize these entities and understand the weight they carry.
● What to do: Thoroughly complete all relevant sections with accurate, specific, and official information.
● How to do it:
○ Be Comprehensive: List your relevant degrees, certifications, publications, patents, and awards.
○ Use Official Names: Use the exact official names for institutions, certifications ("Project Management Professional (PMP)"), and publications. Link to the issuing organization where possible.
○ Use Description Fields: If a description field is available, use it to add context and relevant keywords. Explain what the project was about or what you learned in the certification course.

By meticulously optimizing these sections, you are not just filling out a form. You are authoring the foundational document that the world's most advanced professional reasoning engine will use to understand you. You are crafting the narrative that becomes the very first part of every prompt, setting the stage for every ranking decision to come.
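To make the "profile as prompt text" idea concrete, here is a minimal sketch of how structured profile fields might be flattened into the natural-language "Member Profile" block a prompt-based ranker would read. The field names, schema, and phrasing are illustrative assumptions, not LinkedIn's actual format.

```python
def verbalize_profile(profile: dict) -> str:
    """Flatten a structured profile into natural-language prompt text.

    A hypothetical sketch: every field you fill out becomes a sentence
    or line the reasoning engine can read verbatim.
    """
    lines = [f"Headline: {profile['headline']}"]
    if profile.get("about"):
        lines.append(f"About: {profile['about']}")
    for role in profile.get("experience", []):
        # Each role becomes one line of the career narrative.
        lines.append(
            f"Experience: {role['title']} at {role['company']} "
            f"({role['years']}): {role['summary']}"
        )
    if profile.get("skills"):
        lines.append("Skills: " + ", ".join(profile["skills"]))
    return "\n".join(lines)
```

The point of the sketch is that empty or vague fields simply produce less text for the model to reason over, which is why completeness and specificity matter.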
LinkedIn Content Pre-Launch Checklist for Creators

New Guiding Principle: Your content is the "Question" the AI is asked to evaluate. Every time your post is considered for someone's feed, it becomes the central subject of a detailed, dynamically generated prompt. The 360Brew reasoning engine reads your text from top to bottom, analyzing its quality, clarity, and conceptual relevance. It then compares this "Question" against the "Member Profile" and "Past Interaction Data" to predict a reaction. The system is performing a sophisticated act of matchmaking, attempting to align the language, concepts, and ideas in your content with the demonstrated interests and expertise of each member. Creating content that is easy for a powerful AI to understand, contextualize, and see value in is the new key to visibility.

I. Before You Post: Content Strategy & Creation

This phase is about making sure the "Question" you're about to ask the AI is a good one. A muddled, low-value, or poorly targeted post is like asking a nonsensical question: it's unlikely to get a favorable response.

1. Topic Selection & Conceptual Alignment
● Why it Matters in the 360Brew Era: The 360Brew engine thinks in concepts, not just keywords. It understands the semantic relationships between topics. For example, it knows that "go-to-market strategy," "product-led growth," and "customer acquisition cost" are all related concepts within the domain of B2B marketing. Selecting a topic that aligns with your core expertise (as stated in your profile) and the interests of your target audience creates a powerful "conceptual resonance." When the AI reads a prompt where the concepts in your Profile, the member's History, and your new Content all align, it's a very strong signal of relevance.
● What to do: Strategically choose topics that create a strong conceptual link between your established expertise and your audience's needs.
● How to do it:
○ Identify Audience Pain Points: What are the key challenges, questions, and goals of your target audience? Frame your topics around providing solutions, insights, or new perspectives on these specific pain points.
○ Find Your Niche Intersection: The most powerful content lives at the intersection of three things: your deep expertise, your audience's needs, and
a unique perspective. Don't just talk about "AI in Marketing." Talk about "How Mid-Sized B2B SaaS Companies Can Use AI to Automate Competitive Analysis." This specificity is easily understood by the reasoning engine.
○ Align with Your Profile: The topics of your posts should be a direct reflection of the expertise you claim in your headline and About section. If your profile says you're a cybersecurity expert, your content should be about cybersecurity. This consistency creates a coherent narrative that the AI can easily understand and trust.

2. Content Format Selection
● Why it Matters in the 360Brew Era: Different formats are optimized to generate different types of engagement signals, which in turn become different types of "Past Interaction Data." The system learns which formats your audience prefers and which formats are best for certain topics. A video, for example, is excellent for generating "long dwell time," while a poll is designed for rapid, low-friction interaction. Choosing the right format for your message helps you elicit the type of engagement that best signals value.
● What to do: Choose a content format that best suits your message and is known to engage your target audience. Experiment to see what resonates.
● How to do it:
○ Text Posts: Ideal for focused insights, asking questions, or starting discussions. Because the text is the primary input for the LLM, well-written, well-structured text posts are incredibly powerful.
○ Articles/Newsletters: Best for establishing deep expertise. The long-form text provides a rich, dense source of conceptual information for the AI. A high-quality article becomes a cornerstone piece of evidence for your authority on a topic.
○ Images/Carousels: Excellent for making complex information digestible. Use high-quality visuals and ensure any text is legible on mobile. Provide descriptive alt-text and a strong introductory paragraph; this text is the primary context the AI will read.
○ Native Video: Great for building personal connection and capturing attention. Keep videos concise and add captions. The system can process the transcript of your video, so what you say is just as important as what you show.
○ Polls: Perfect for generating quick, broad engagement. While a lower-intent signal, a successful poll can significantly increase your content's initial velocity, helping it pass the Candidate Generation stage.

3. Crafting High-Quality, Engaging Content
● Why it Matters in the 360Brew Era: This is the most critical step. The 360Brew engine is, at its core, a language model. It is trained to recognize and value high-quality, well-structured, and coherent text. Typos, grammatical errors, rambling sentences, and logical fallacies are not just cosmetic issues; they are signals of low-quality content that the model can now detect. A well-argued, insightful, and clearly written post is inherently optimized for a system designed to understand language.
● What to do: Create content that is valuable, insightful, well-structured, and encourages meaningful interaction. Write for an intelligent human, and you will be writing for the AI.
● How to do it:
○ Hook Attention Immediately: The "Lost-in-Distance" challenge applies to your content, too. The first sentence of your post is the most important. It must grab attention and clearly state the value proposition to prevent a "scroll-past" (which is a negative signal).
○ Structure Your Argument: Use formatting (bolding, bullet points, short paragraphs) to structure your content. This makes it easier for both humans and the AI to parse your main points and follow your logic.
○ Provide Genuine Value First: Your primary goal should be to educate, inform, or inspire. Authentic, valuable content tends to resonate more deeply and generate higher-intent engagement signals (comments, shares).
○ Encourage Discussion: End your posts with an open-ended question. This explicitly invites comments. When a member comments, their response and your subsequent reply create a valuable "conversation thread" that signals to the system that your content is fostering a meaningful discussion.
○ Proofread Meticulously: A post riddled with errors is a signal of low quality. Use a grammar checker or have a colleague review your content before posting. Professionalism in your prose matters.
II. As You Post: Optimizing for Discovery & Initial Engagement

This phase is about ensuring your well-crafted content is packaged correctly for the system, making it as easy as possible for the prompt engineer to understand its context and for the GNN to connect it to the right audience.

4. Writing Compelling Copy & Headlines
● Why it Matters in the 360Brew Era: The text of your post (and the headline for an article) is the literal Question being fed to the 360Brew engine. Clear, engaging copy with relevant conceptual language not only attracts human attention but also makes the AI's comprehension task easier and more accurate. A strong opening stops the scroll, influencing implicit signals like dwell time.
● What to do: Craft clear, concise, and compelling text that includes relevant concepts and encourages viewers to engage further.
● How to do it:
○ Strong Opening: Make the first one or two sentences captivating. They should summarize the core value and create curiosity.
○ Incorporate Concepts Naturally: Weave in the 1-3 primary concepts your audience would associate with the topic. Don't "keyword stuff"; think about expressing the core ideas. Instead of listing "SEO, SEM, PPC," write about "building a holistic search engine presence." The model understands the connection.
○ Clear Call to Action (CTA): What do you want people to do? A direct CTA like "What are your thoughts?" or "Share your experience in the comments" explicitly frames the post as a conversation starter.

5. Strategic Use of Hashtags
● Why it Matters in the 360Brew Era: Hashtags are explicit categorization signals. They are structured metadata that helps the Candidate Generation GNN quickly understand the primary topic of your post and connect it to broader conversations and interest groups. While 360Brew can infer topics from your text, hashtags provide a clean, unambiguous signal that removes any guesswork.
● What to do: Use a small number of highly relevant hashtags that mix broad and niche topics.
● How to do it:
○ Use 3-5 Relevant Hashtags: This is generally a good range. Too many can look spammy and dilute the signal.
○ Mix Broad and Niche: Use one or two broad hashtags (e.g., #marketing, #leadership) for wider discovery and two or three niche hashtags (e.g., #productledgrowth, #b2bsaas) to attract a more specific, high-intent audience.
○ Avoid Irrelevant Hashtags: Using a popular but irrelevant hashtag to try to "hack" reach is now more likely to harm you. The language model can see the mismatch between your content's text and the hashtag, which can be interpreted as a low-quality or spam signal.

6. Tagging Relevant People & Companies (When Appropriate)
● Why it Matters in the 360Brew Era: Tagging is another form of explicit, structured metadata. It creates a direct "edge" in the Economic Graph between your post and the person or company you tag. This strengthens the signals for the Candidate Generation GNN, potentially increasing your post's reach into the tagged entity's network. It also triggers a notification, encouraging initial engagement.
● What to do: Tag individuals or companies only when they are genuinely relevant to the content.
● How to do it:
○ Relevance is Key: Tag people you are referencing, quoting, or collaborating with. Tag companies you are analyzing or celebrating. Do not tag a list of 20 influencers just for visibility. This is perceived as spam by both users and the system.
○ Notify & Engage: If you tag someone, they are notified. This is a powerful way to spark initial engagement if they find the content valuable and relevant, which in turn can bootstrap the post's velocity.

III. After You Post: Fostering Engagement & Learning

This phase is about capitalizing on the initial visibility your post receives and feeding the best possible signals back into the system's learning loop.

7. Engaging with Comments Promptly & Thoughtfully
● Why it Matters in the 360Brew Era: Comments are one of the most powerful forms of "Past Interaction Data." When someone comments on your post, and you reply, you are creating a rich, conversational thread.
The text of this entire conversation
can become context in future prompts. It signals to the system that your content is not a monologue but a catalyst for valuable professional discussion. This is a very high-quality signal.
● What to do: Monitor your posts and respond to comments in a timely and thoughtful manner.
● How to do it:
○ Acknowledge All Comments: Even a simple "Thanks for sharing your perspective!" can be valuable.
○ Answer Questions: If people ask questions, provide helpful, detailed answers. This further demonstrates your expertise.
○ Ask Follow-up Questions: Keep the conversation going. Your replies are as much a part of the content as your original post.
○ Foster Respectful Debate: If there are differing opinions, facilitate a professional and respectful discussion. A healthy debate is a sign of a highly engaging post.

By following this comprehensive checklist, you are systematically creating and packaging your content in a way that is perfectly aligned with how a large language model thinks. You are making it easy for the AI to understand your expertise, see the value in your content, and match it with the right audience.
LinkedIn Engagement Checklist for Marketers and Creators

New Guiding Principle: Your activity (likes, comments, shares) is the raw material for the "Past Interaction Data" section of the AI's prompt. Every engagement you make is no longer just a passive vote; it is an active contribution to the live, personalized briefing document that the 360Brew engine reads to understand you. Strategic engagement is the act of deliberately curating this data set. You are providing the real-time examples that the model uses for In-Context Learning, effectively teaching it what you value, who you are, and what conversations you belong in. A high-quality engagement history leads to a powerful, persuasive prompt and, consequently, a more relevant and valuable feed experience.

I. Quick Daily Engagements (5-15 minutes per day)

These are small, consistent actions that keep your "Past Interaction Data" fresh and aligned with your goals. Think of this as the daily maintenance of your professional identity signal.

1. Reacting Strategically to Relevant Feed Content
● Why it Matters in the 360Brew Era: Each reaction (Like, Celebrate, Insightful, etc.) you make is an explicit data point that gets logged and is eligible for inclusion in future prompts. When the prompt engineer assembles your Past Interaction Data, it might include a line like: "Member has liked the following posts: [Content of Post X]..." A reaction is a direct, unambiguous way of telling the system, "This content is relevant to me." Reacting to content from your target audience or on your core topics reinforces your position within that "conceptual neighborhood," strengthening the signals for both the Candidate Generation GNN and the 360Brew reasoning engine.
● What to do: Quickly scan your feed and thoughtfully react to 3-5 posts that are highly relevant to your expertise, industry, or target audience.
● How to do it:
○ Prioritize Relevance over Volume: Focus on reacting to posts from key connections, industry leaders, and on topics central to your brand. A single reaction on a highly relevant post is a better signal than 20 reactions on random content.
○ Use Diverse Reactions for Nuance: Don't just "Like" everything. Using "Insightful" on a data-driven post or "Celebrate" on a colleague's promotion provides a richer, more nuanced signal. While it's not explicitly stated how each reaction is weighted, it provides more detailed semantic information for the model to potentially learn from.
○ Avoid Indiscriminate Reacting: Mass-liking dozens of posts in a few minutes can dilute the signal of your true interests. It creates a noisy "Past Interaction Data" set, making it harder for the prompt engineer to identify what you genuinely find valuable. Be deliberate.

2. Brief, Insightful Comments on 1-2 Key Posts
● Why it Matters in the 360Brew Era: A comment is one of the most powerful signals you can create. It is a high-intent action that generates rich, textual data. When you comment, two things happen:
○ Your action is logged for your own Past Interaction Data: "Member has commented on the following posts: [Content of Post Y]..."
○ The text of your comment itself becomes associated with your professional identity. The 360Brew engine can read your comment and use its content to refine its understanding of your expertise and perspective.
Leaving a relevant, insightful comment on another expert's post is like co-authoring a small piece of content with them. It explicitly links your identity to theirs in a meaningful, conceptual way.
● What to do: Identify 1-2 highly relevant posts in your feed and add a brief, thoughtful comment that contributes to the discussion.
● How to do it:
○ Add Value, Don't Just Agree: Instead of just writing "Great post!", expand on a point, ask a clarifying question, or share a brief, related experience. This provides unique text for the AI to analyze.
○ Use Relevant Concepts Naturally: Your comment text becomes a signal of your expertise.
If you're a cybersecurity expert, commenting with insights about "zero-trust architecture" on a relevant post reinforces your authority on that topic.
○ Be Timely: Commenting on fresher posts often yields more visibility and is more likely to be part of the "recency-weighted" history construction for others who see that post.
○ Keep it Professional and Constructive: Your comments are a permanent part of your professional record, readable by both humans and the AI.
II. Focused Daily/Regular Engagements (15-30 minutes per day, several times a week)

These activities require a bit more effort but create stronger, more durable signals that can significantly shape the AI's perception of your professional identity.

3. Participating Actively in 1-2 Relevant LinkedIn Groups
● Why it Matters in the 360Brew Era: Group activity is a powerful signal of deep interest in a specific niche. Your interactions within a group (the posts you share, the questions you answer) provide a concentrated stream of topically aligned "Past Interaction Data." This makes it incredibly easy for the similarity-based history constructor to find strong, relevant examples. For the AI, your active participation in "The Advanced Product Marketing Group" is a powerful piece of evidence that you are, in fact, an expert in product marketing.
● What to do: Identify and actively participate in 1-2 LinkedIn Groups that are highly relevant to your industry, expertise, or target audience.
● How to do it:
○ Share Valuable Content: Post relevant articles, insights, or questions within the group. This establishes you as a contributor.
○ Engage with Others' Posts: Like, comment, and answer questions in group discussions. This creates a rich trail of high-intent, topically focused engagement signals.
○ Choose Active, Well-Moderated Groups: The quality of the conversation matters. A well-moderated group provides higher-quality context for the AI to learn from.

4. Sending Personalized Connection Requests
● Why it Matters in the 360Brew Era: Expanding your relevant network strengthens your position in the Economic Graph, which is a key input for the Candidate Generation GNN. An accepted connection request is a strong positive signal. A personalized request is more likely to be accepted and can even become a piece of textual data itself (though it's private). More importantly, the people you connect with become a primary source of content and context.
Engaging with their content is what builds out your Past Interaction Data.
● What to do: Send a few targeted, personalized connection requests each week to individuals relevant to your professional goals.
● How to do it:
○ Always Add a Personal Note: Explain why you want to connect. Reference a shared interest, a recent post they wrote, or a mutual connection. This dramatically increases the acceptance rate.
○ Focus on Mutual Value: Think about what value the connection might bring to them as well. Networking is a two-way street.
○ Connect with People Who Engage with Your Content: If someone consistently likes or comments on your posts, they are an ideal candidate for a connection request. They have already demonstrated an interest in your expertise.

III. More Involved Weekly/Bi-Weekly Engagements (30-60+ minutes per session)

These are high-effort, high-impact activities that create cornerstone assets for your professional identity. They provide the richest, densest sources of textual data for the 360Brew engine to analyze.

5. Writing and Publishing LinkedIn Articles or Newsletters
● Why it Matters in the 360Brew Era: A long-form article or newsletter is the ultimate high-quality data source. The 360Brew engine is a language model; giving it a well-structured, 1,000-word article on your core area of expertise is like handing it a detailed research paper for its dossier on you. This becomes a powerful, permanent "node" in the Economic Graph that is rich with conceptual information. When the system evaluates your shorter future posts, it can reference its deep understanding of your expertise from your articles. A successful newsletter also attracts subscribers, a very strong signal of audience validation.
● What to do: If you have in-depth insights to share, consider publishing LinkedIn Articles or starting a Newsletter on a topic relevant to your expertise and target audience.
● How to do it:
○ Choose a Niche Focus: Consistency is key.
A newsletter that consistently delivers value on a specific topic will build a loyal audience and create a coherent body of work for the AI to analyze.
○ Provide Substantial Value: Articles should offer deep insights, comprehensive guides, or unique perspectives. This is your chance to prove your expertise, not just state it.
○ Optimize for Readability: Use headings, subheadings, bullet points, and images to break up the text.
○ Engage with Comments: Foster a discussion on your published pieces. The conversation in the comments is an extension of the article itself.

6. Reviewing and Endorsing Skills for Connections
● Why it Matters in the 360Brew Era: Endorsing a skill for a connection is a structured data signal that reinforces the Economic Graph. While primarily benefiting the person you endorse, this reciprocal activity also signals your own areas of expertise and your engagement within your professional community. It tells the system, "I am a professional in this domain, and I am qualified to validate the skills of others." It's a subtle but valuable form of demonstrating your own standing.
● What to do: Periodically review connection profiles and endorse skills for which you can genuinely vouch.
● How to do it:
○ Be Authentic: Only endorse skills you know the person possesses.
○ Focus on Key Skills: Prioritize endorsing the most relevant and important skills for your connections.
○ Reciprocity Often Occurs: Connections you endorse may be more likely to endorse you back, further strengthening your own profile.

By consistently applying these engagement strategies, you are actively and deliberately curating the data set that defines you to the LinkedIn AI. You are moving from being a passive subject of an algorithm to an active participant in a conversation with a reasoning engine. This isn't about "being active" for the sake of it; it's about strategic, relevant, and valuable interactions that provide the clearest possible context for the AI to understand your professional identity and amplify your voice.
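The checklist above describes engagements being rendered into prompt lines like "Member has liked the following posts: [...]". A minimal sketch of that verbalization step might look like the following; the event fields, recency limit, and sentence templates are illustrative assumptions about how such a "Past Interaction Data" section could be built, not LinkedIn's actual implementation.

```python
from collections import defaultdict


def verbalize_interactions(events: list, limit: int = 5) -> str:
    """Render a member's most recent engagements as prompt lines.

    Hypothetical sketch: keep only the `limit` most recent events,
    group them by action, and emit one sentence per action type.
    """
    recent = sorted(events, key=lambda e: e["ts"], reverse=True)[:limit]
    by_action = defaultdict(list)
    for e in recent:
        by_action[e["action"]].append(e["text"])
    return "\n".join(
        f"Member has {action} the following posts: "
        + "; ".join(f"[{t}]" for t in texts)
        for action, texts in by_action.items()
    )
```

Notice how the recency cutoff models the point made above: indiscriminate mass-liking crowds genuinely representative engagements out of the limited window the prompt can hold.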
LinkedIn Newsfeed Technologies

This section provides a granular, technical outline of the LinkedIn newsfeed generation architecture, synthesized from publicly available research papers and engineering blogs. It details the specific systems, models, and data flows involved in the end-to-end process, from offline model training to real-time content delivery.

I. Offline Ecosystem: AI Asset Generation & Training

The offline ecosystem is responsible for all large-scale data processing and model training. It operates on a cadence of hours to months, producing the versioned AI models and embeddings that are consumed by the online serving systems.

● A. Pipeline Orchestration & Execution Environment:
○ 1.1. Orchestration Platform (OpenConnect): A platform built on Flyte for defining, executing, and managing all AI/ML workflows. Replaces the legacy ProML ecosystem.
○ 1.2. Dependency Management: Utilizes Docker containers and resource manifests to decouple component dependencies, enabling rapid iteration and eliminating full-workflow rebuilds for minor changes.
○ 1.3. Compute Environment: Multi-region, multi-cluster Kubernetes setup with a global scheduler for intelligent routing based on data locality and resource availability (CPU/GPU).
○ 1.4. Distributed Training Frameworks: Primarily PyTorch Fully Sharded Data Parallel (FSDP) for large model training. Utilizes Horovod for certain distributed tasks.
○ 1.5. Resilience: Employs active checkpointing and automated job retries via Flyte to handle node maintenance and infrastructure disruptions, reducing training failures by a reported 90%.
● B. Core Foundation Model Training (360Brew):
○ 2.1. Base Model: Mixtral 8x22B, a decoder-only Mixture-of-Experts (MoE) Transformer architecture.
○ 2.2.
Training Stage 1: Continuous Pre-Training (CPT): Further pre-training of the base model on trillions of tokens of verbalized, first-party LinkedIn data (member profiles, interactions, Economic Graph data) to imbue it with domain-specific knowledge.
○ 2.3. Training Stage 2: Instruction Fine-Tuning (IFT): Fine-tuning on a blend of open-source and proprietary instruction datasets using preference alignment algorithms like DPO (Direct Preference Optimization) to enhance instruction-following and zero-shot reasoning capabilities.
○ 2.4. Training Stage 3: Supervised Fine-Tuning (SFT): Fine-tuning on millions of labeled examples in a Multi-Turn Chat (MTC) format to learn specific ranking and recommendation tasks. The loss function is a weighted combination of prompt loss and masked MTC loss.
○ 2.5. Final Artifact: A frozen, versioned 360Brew Foundation Model (150B+ parameters) is produced.
● C. Ancillary Model Training & Asset Generation:
○ 3.1. Candidate Generation Model (Cross-Domain GNN): A Graph Neural Network trained on a unified, heterogeneous graph that consolidates data from multiple domains (Feed, Jobs, Notifications, Email). Produces a model capable of generating holistic, cross-domain member embeddings.
○ 3.2. Efficient Model Generation (SLMs):
■ 3.2.1. Knowledge Distillation: A process where the large 360Brew model (teacher) is used to train a smaller, more efficient model (student), often by minimizing the KL divergence between their output logits.
■ 3.2.2. Structured Pruning: Utilizes algorithms like OSSCAR to perform one-shot or gradual pruning of MLP layers and attention heads, creating a smaller model (e.g., from 8B to 6.4B parameters) that is then fine-tuned via distillation to recover performance.
■ 3.2.3. Final Artifact: Produces a set of versioned Small Language Models (SLMs) for deployment in highly latency-sensitive or cost-constrained scenarios.

II. Real-Time Data Infrastructure

This layer is responsible for capturing and serving the most recent member activity, which is essential for the In-Context Learning mechanism of the online systems.

● A. Event Streaming & Ingestion:
○ 1.1.
Event Bus: All client-side interactions (impressions, clicks, dwells, comments, shares) are published as events to a Kafka stream.
● B. Real-Time Data Storage & Serving:
○ 2.1. In-Memory Datastore: The Kafka stream is consumed by real-time processing systems that write the recent interaction data into low-latency, key-value stores like Pinot or Venice.
○ 2.2. Function: This datastore serves as the source of truth for the "Past Interaction Data" used by the Real-Time Prompt Assembler during online inference.

III. Online Serving Funnel (Real-Time Inference)

This is the end-to-end, sub-second process executed for every feed request.

● A. L0: Candidate Generation:
○ 1.1. Graph-Based Retrieval: The pre-trained Cross-Domain GNN model is used to score potential candidates based on their relationship to the member's holistic identity within the Economic Graph.
○ 1.2. Similarity-Based Retrieval: Utilizes Approximate Nearest Neighbor (ANN) search algorithms (e.g., FAISS, ScaNN) on pre-computed content embeddings to find items semantically similar to a member's interests.
○ 1.3. Heuristic-Based Retrieval: A set of fast, rule-based systems that pull in candidates based on signals like timeliness (e.g., content from direct connections posted in the last hour) and engagement velocity.
○ 1.4. Aggregation: Candidates from all sources are collected, de-duplicated, and passed to the next stage. Output is a longlist of several thousand candidate item IDs.
● B. L2: Ranking & Reasoning:
○ 2.1. Real-Time Prompt Assembler: A service that, for each candidate item, constructs a unique, verbose prompt.
■ 2.1.1. Process: It fetches the member's profile text, the candidate post's text, and makes a live call to the Pinot/Venice datastore to retrieve the member's most recent interactions.
■ 2.1.2. History Construction Logic: Implements programmatic rules to mitigate the "Lost-in-Distance" problem. This can include similarity-based curation (prioritizing historical items similar to the candidate) and recency/priority weighting to structure the prompt for optimal performance.
○ 2.2. Inference Serving Engine:
■ 2.2.1.
Framework: vLLM, an open-source serving framework for high-throughput LLM inference. Learn more about Trust Insights at TrustInsights.ai p. 59​
■​ 2.2.2. Core Technology: Utilizes PagedAttention to efficiently manage GPU memory and enable high-concurrency request batching. ■​ 2.2.3. Execution: The pre-compiled, frozen 360Brew or SLM model is loaded into the vLLM engine. The engine receives batches of dynamically generated prompts and performs inference. ■​ 2.2.4. Optimization: Employs Tensor Parallelism to distribute the model across multiple GPUs and uses FP8 quantization to increase inference speed and reduce memory footprint on compatible hardware (e.g., H100 GPUs). ○​ 2.3. Output: A list of candidate items, each with a high-precision relevance score from the foundation model. ●​ C. Finalization, Delivery & Feedback: ○​ 3.1. Business Rule Filters: A final post-processing step that applies non-relevance-based rules: ■​ Trust & Safety: Content moderation filters. ■​ Impression Discounting: Down-ranks or removes content the member has already seen. ■​ Frequency Capping: Applies rules to prevent author or topic saturation in the feed. ○​ 3.2. Diversity Modeling (Setwise Ranking): An optional layer that can re-rank the top-N candidates by evaluating them as a collective "set" to optimize for session-level diversity and coherence, replacing legacy rule-based diversity re-rankers. ○​ 3.3. Delivery: The final, ordered list of item IDs is sent to a Render Service which formats the content for the specific client (Web, iOS, Android) before delivering the final payload. ○​ 3.4. Feedback Loop: All final impressions and subsequent user interactions are logged and streamed back via Kafka to both the real-time and offline data systems, closing the loop for the next cycle of ICL and model training. Learn more about Trust Insights at TrustInsights.ai p. 60​
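To make the History Construction Logic (step 2.1.2) concrete, here is a toy sketch of similarity-plus-recency curation in Python. Everything here is an illustrative assumption, not LinkedIn's code: the function names, the word-overlap similarity (a crude stand-in for embedding similarity), and the 0.7/0.3 weights are invented for demonstration.

```python
def similarity(a, b):
    """Crude word-overlap (Jaccard) similarity; a stand-in for embedding similarity."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if (wa | wb) else 0.0

def construct_history(interactions, candidate_text, max_items=5):
    """Curate past interactions for the prompt: blend similarity-to-candidate
    with recency, keep only the top few, and place the most relevant items
    last (closest to the question), mitigating 'Lost-in-Distance' decay."""
    scored = []
    for rank, item in enumerate(interactions):  # interactions assumed newest-first
        recency_weight = 1.0 / (1 + rank)
        score = 0.7 * similarity(item["text"], candidate_text) + 0.3 * recency_weight
        scored.append((score, rank))
    scored.sort()  # ascending: weakest first, strongest last
    return [interactions[rank] for _, rank in scored[-max_items:]]
```

The key design point is ordering: because long-context models attend more reliably to material near the end of the prompt, the curated history is emitted weakest-first so the most relevant examples sit closest to the final instruction.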
Advertisements
In case you missed the ads up front. Ready to transform your AI marketing approach? Schedule a complimentary consultation to discuss how Trust Insights can help you apply these frameworks to your unique challenges and opportunities. Our team of experts will provide personalized guidance based on your current AI maturity, strategic priorities, and organizational context. Contact Trust Insights Today for your free consultation.
Done By You services from Trust Insights:
- Almost Timeless: 48 Foundation Principles of Generative AI: A non-technical AI book by cofounder and Chief Data Scientist Christopher Penn, Almost Timeless teaches you how to think about AI and apply it to your organization.
Done With You services from Trust Insights:
- AI-Ready Strategist: Ideal for CMOs and C-Suite leaders, the AI-Ready Strategist teaches you frameworks and methods for developing, deploying, and managing AI at any scale, from the smallest NGO to the largest enterprises, with an emphasis on people, process, and governance.
- Generative AI Use Cases for Marketers course: Learn the 7 major use case categories for generative AI in marketing with 21 different hands-on exercises, all data and prompts provided.
- Mastering Prompt Engineering for Marketing course: Learn the foundation skills you need to succeed with generative AI, including 3 major prompt frameworks, advanced prompting techniques, and how to choose different kinds of prompts based on the task and tool.
Done For You services from Trust Insights:
- Customized consulting: If you love the promise of analytics, data science, and AI but don't love the huge amount of work that goes into fulfilling that promise, from data governance to agentic AI deployment, let us do it for you. We've got more than a decade of real-world AI implementation (AI existed long before ChatGPT) built on your foundational data so you can reap the benefits of AI while your competitors are still figuring out how to prompt.
- Keynote talks and workshops: Bring Trust Insights to your event! We offer customized keynotes and workshops for conferences, company retreats, executive leadership meetings, annual meetings, and roundtables. Every full-fee talk is customized to your event, industry, or company, and you get the talk recording and materials (transcripts, prompts, data) for your audience to work with and learn from.
About TrustInsights.ai
Trust Insights is a management consulting firm specializing in helping you turn data into results you care about. Whether it's traditional analytics and data science or the latest innovations in machine learning and artificial intelligence, Trust Insights helps you achieve practical, beneficial outcomes instead of playing buzzword bingo. With a variety of services from training and education to done-for-you AI deployments, help for any of your data and insights needs is just a tap away.
● Learn more about Trust Insights: https://www.trustinsights.ai
● Learn more about Trust Insights AI Services: https://www.trustinsights.ai/aiservices
Methodology and Disclosures
Sources
Original Sources (Pre-360Brew Era)
1. Borisyuk, F., Zhou, M., Song, Q., Zhu, S., Tiwana, B., Parameswaran, G., Dangi, S., Hertel, L., Xiao, Q. C., Hou, X., Ouyang, Y., Gupta, A., Singh, S., Liu, D., Cheng, H., Le, L., Hung, J., Keerthi, S., Wang, R., Zhang, F., Kothari, M., Zhu, C., Sun, D., Dai, Y., Luan, X., Zhu, S., Wang, Z., Daftary, N., Shen, Q., Jiang, C., Wei, H., Varshney, M., Ghoting, A., & Ghosh, S. (2024). LiRank: Industrial large scale ranking models at LinkedIn. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '24). Association for Computing Machinery. https://doi.org/10.1145/3637528.3671561 (Also arXiv:2402.06859v2 [cs.LG])
2. Borisyuk, F., Hertel, L., Parameswaran, G., Srivastava, G., Ramanujam, S., Ocejo, B., Du, P., Akterskii, A., Daftary, N., Tang, S., Sun, D., Xiao, C., Nathani, D., Kothari, M., Dai, Y., & Gupta, A. (2025). From features to transformers: Redefining ranking for scalable impact. In Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '25) [anticipated publication]. (Also arXiv:2502.03417v1 [cs.LG])
3. Borisyuk, F., He, S., Ouyang, Y., Ramezani, M., Du, P., Hou, X., Jiang, C., Pasumarthy, N., Bannur, P., Tiwana, B., Liu, P., Dangi, S., Sun, D., Pei, Z., Shi, X., Zhu, S., Shen, Q., Lee, K.-H., Stein, D., Li, B., Wei, H., Ghoting, A., & Ghosh, S. (2024). LiGNN: Graph neural networks at LinkedIn. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '24). Association for Computing Machinery. https://doi.org/10.1145/3637528.3671566
4. Zhang, F., Kothari, M., & Tiwana, B. (2024, August 7). Leveraging dwell time to improve member experiences on the LinkedIn feed. LinkedIn Engineering Blog. Retrieved from https://www.linkedin.com/blog/engineering/feed/leveraging-dwell-time-to-improve-member-experiences-on-the-linkedin-feed (Original post: Dangi, S., Jia, J., Somaiya, M., & Xuan, Y. (2020, October 29). Understanding dwell time to improve LinkedIn feed ranking. LinkedIn Engineering Blog. Retrieved from https://engineering.linkedin.com/blog/2020/understanding-feed-dwell-time)
5. Ackerman, I., & Kataria, S. (2021, August 19). Homepage feed multi-task learning using TensorFlow. LinkedIn Engineering Blog. Retrieved from https://engineering.linkedin.com/blog/2021/homepage-feed-multi-task-learning-using-tensorflow (Also: https://www.linkedin.com/blog/engineering/feed/homepage-feed-multi-task-learning-using-tensorflow)
6. Zhu, J. (S.), Ghoting, A., Tiwana, B., & Varshney, M. (2023, May 2). Enhancing homepage feed relevance by harnessing the power of large corpus sparse ID embeddings. LinkedIn Engineering Blog. Retrieved from https://engineering.linkedin.com/blog/2023/enhancing-homepage-feed-relevance-by-harnessing-the-power-of-lar
7. Mohamed, A., & Li, Z. (2019, June 27). Community-focused feed optimization. LinkedIn Engineering Blog. Retrieved from https://engineering.linkedin.com/blog/2019/06/community-focused-feed-optimization
8. Ouyang, Y., Gupta, V., Basu, K., Diciccio, C., Gavin, B., & Guo, L. (2020, August 27). Using Bayesian optimization for balancing metrics in recommendation systems. LinkedIn Engineering Blog. Retrieved from https://www.linkedin.com/blog/engineering/recommendations/using-bayesian-optimization-for-balancing-metrics-in-recommendat
9. Ghike, S., & Gupta, S. (2016, March 3). FollowFeed: LinkedIn's feed made faster and smarter. LinkedIn Engineering Blog. Retrieved from https://engineering.linkedin.com/blog/2016/03/followfeed-linkedin-s-feed-made-faster-and-smarter
10. Gupta, R., Ovsankin, S., Li, Q., Lee, S., Le, B., & Khanal, S. (2022, April 26). Near real-time features for near real-time personalization. LinkedIn Engineering Blog. Retrieved from https://engineering.linkedin.com/blog/2022/near-real-time-features-for-near-real-time-personalization
11. GV, F. (2022, September 28). Open sourcing Venice: LinkedIn's derived data platform. LinkedIn Engineering Blog. Retrieved from https://engineering.linkedin.com/blog/2022/open-sourcing-venice-linkedin-s-derived-data-platform
12. Hosni, Y. (2022, November). How LinkedIn uses machine learning to rank your feed. KDnuggets. Retrieved from https://www.kdnuggets.com/2022/11/linkedin-uses-machine-learning-rank-feed.html
13. Jurka, T., Ghosh, S., & Davies, P. (2018, March 15). A look behind the AI that powers LinkedIn's feed: Sifting through billions of conversations to create personalized news feeds for hundreds of millions of members. LinkedIn Engineering Blog. Retrieved from https://engineering.linkedin.com/blog/2018/03/a-look-behind-the-ai-that-powers-linkedins-feed-sifting-through
14. Yu, Y. Y., & Saint-Jacques, G. (n.d.). Choosing an algorithmic fairness metric for an online marketplace: Detecting and quantifying algorithmic bias on LinkedIn. [Unpublished manuscript/preprint, contextually implied source]
15. Sanjabi, M., & Firooz, H. (2025, February 7). 360Brew: A decoder-only foundation model for personalized ranking and recommendation. arXiv. arXiv:2501.16450v3 [cs.IR]
16. Firooz, H., Sanjabi, M., Jiang, W., & Zhai, X. (2025, January 2). Lost-in-Distance: Impact of contextual proximity on LLM performance in graph tasks. arXiv. arXiv:2410.01985v2 [cs.AI]
New Sources (360Brew Era & Modern Ecosystem)
1. Sanjabi, M., Firooz, H., & 360Brew Team. (2025, August 23). 360Brew: A decoder-only foundation model for personalized ranking and recommendation. arXiv. arXiv:2501.16450v4 [cs.IR]. (Primary source for the 360Brew model, its prompt-based architecture, and its In-Context Learning mechanism.)
2. He, S., Choi, J., Li, T., Ding, Z., Du, P., Bannur, P., Liang, F., Borisyuk, F., Jaikumar, P., Xue, X., & Gupta, V. (2025, June 15). Large scalable cross-domain graph neural networks for personalized notification at LinkedIn. arXiv. arXiv:2506.12700v1 [cs.LG]. (Primary source for the evolution from domain-specific GNNs to the holistic Cross-Domain GNN for candidate generation.)
3. Firooz, H., Sanjabi, M., Jiang, W., & Zhai, X. (2025, January 2). Lost-in-Distance: Impact of contextual proximity on LLM performance in graph tasks. arXiv. arXiv:2410.01985v2 [cs.AI]. (Primary source detailing the core technical challenge of long-context reasoning that informs the prompt engineering and history construction strategies for 360Brew.)
4. Behdin, K., Dai, Y., Fatahibaarzi, A., Gupta, A., Song, Q., Tang, S., et al. (2025, February 20). Efficient AI in practice: Training and deployment of efficient LLMs for industry applications. arXiv. arXiv:2502.14305v1 [cs.IR]. (Primary source for the practical deployment strategies, including knowledge distillation and pruning, used to create efficient SLMs from large foundation models like 360Brew for production use.)
5. Lyu, L., Zhang, C., Shang, Y., Jha, S., Jain, H., Ahmad, U., & the OpenConnect Team. (2024). OpenConnect: LinkedIn's next-generation AI pipeline ecosystem. LinkedIn Engineering Blog. (Contextually implied source describing the replacement of the legacy ProML platform with the modern OpenConnect/Flyte-based ecosystem for all AI/ML training pipelines.)
6. Kwon, W., et al., & vLLM Team. (2024). How we leveraged vLLM to power our GenAI applications at LinkedIn. LinkedIn Engineering Blog. (Contextually implied source detailing the adoption of vLLM as the core inference serving engine for large-scale GenAI and foundation models at LinkedIn.)
We used Google's Gemini 2.5 Pro model to synthesize this guide from the data. The source data is approximately 325,000 words.

LinkedIn Marketing Guide: AI Algorithm Changes

Introduction
If you've spent more than five minutes on LinkedIn in the last year, you've doubtless seen one or more "gurus" making definitive claims that they've "cracked the new algorithm." They'll tell you the magic number of comments to leave, the exact time to post, or the one type of content that gets "10x reach." Comment on their post within the first hour, they promise, and they'll sell you the secret to boosting your performance on LinkedIn.
For a long time, that advice, while often simplistic, was at least pointed in the right direction. It was based on the idea of a complex, multi-stage pipeline of machine learning models that processed signals. A like was a signal. A comment was a stronger signal. A keyword was a signal. The game was to send the best signals to a sophisticated but ultimately mechanical system. Our previous guides, including the Mid-2025 Edition, were designed to help you understand that very system.
That's not how LinkedIn works anymore.
The reality is that the slow, incremental evolution of LinkedIn's feed has been superseded by a sudden, fundamental revolution. The change is so profound that most existing advice, even from just four months ago, is now obsolete (for real, we deleted our guide from May). LinkedIn hasn't just upgraded its engine; it has ripped out the entire mechanical assembly line and replaced it with a single, powerful, centralized brain.
What that means for you and me is that the old game of "sending signals" is over. The new game is about having a conversation. There is no "hack" for a system that is designed to understand language, context, and reasoning in a way that mirrors human comprehension. This isn't just a new algorithm. It's an entirely new ecosystem, and it runs on a different fuel: language.
What that also means is that if we understand this new ecosystem, how its central reasoning engine thinks, and what it values, we can align our efforts with the way it is built. This isn't hacking anything. This is learning how to communicate effectively. It's about moving from crafting signals for a machine to crafting a compelling narrative for an intelligent reader.
So how do we do this? By listening, once again, to what LinkedIn has had to say. In this totally unofficial guide, still not at all endorsed by anyone at LinkedIn, we have synthesized a new wave of academic papers, engineering blogs, and conference presentations from LinkedIn's own AI researchers. We've used generative AI to boil down dozens of new
sources that describe this paradigm shift in detail. They've given this new system a name—360Brew—and have been surprisingly open about how it works. Each step of the process we outline details what we can do to best work with this new architecture.
This guide is STILL not endorsed or approved by LinkedIn; no LinkedIn employees were consulted outside of using public data published by those employees.
After the walkthrough of this new ecosystem, you'll find THREE updated toolkits you can copy and paste into your favorite generative AI tool. These checklists have been completely re-framed to align with this new, language-driven paradigm:
The LinkedIn Profile Checklist: Use this toolkit to transform your profile from a list of data points into a compelling dossier that communicates your expertise directly to the reasoning engine.
The LinkedIn Content Pre-Launch Checklist: Use this toolkit to craft content that is not just keyword-optimized, but structured, well-reasoned, and conversation-starting—the exact qualities the new system is designed to identify and amplify.
The LinkedIn Engagement Checklist: Use this toolkit to guide your daily and weekly activities, helping you provide the highest-quality context signals that inform the AI's in-context learning and build a strong foundation for your visibility.
The age of chasing algorithmic hacks is over. The age of clear, compelling, and valuable communication has begun. This guide will show you how to thrive in it.
Got Questions? This guide comes with a NotebookLM instance you can interactively ask questions from: https://notebooklm.google.com/notebook/d7b059b9-ba79-4ade-a4fc-2d18d8e26588
A Word From Our Sponsor, Us
Ready to transform your AI marketing approach?
Done By You services from Trust Insights:
- Almost Timeless: 48 Foundation Principles of Generative AI: A non-technical AI book by cofounder and Chief Data Scientist Christopher Penn, Almost Timeless teaches you how to think about AI and apply it to your organization.
Done With You services from Trust Insights:
- AI-Ready Strategist: Ideal for CMOs and C-Suite leaders, the AI-Ready Strategist teaches you frameworks and methods for developing, deploying, and managing AI at any scale, from the smallest NGO to the largest enterprises, with an emphasis on people, process, and governance.
- Generative AI Use Cases for Marketers course: Learn the 7 major use case categories for generative AI in marketing with 21 different hands-on exercises, all data and prompts provided.
- Mastering Prompt Engineering for Marketing course: Learn the foundation skills you need to succeed with generative AI, including 3 major prompt frameworks, advanced prompting techniques, and how to choose different kinds of prompts based on the task and tool.
Done For You services from Trust Insights:
- Customized consulting: If you love the promise of analytics, data science, and AI but don't love the huge amount of work that goes into fulfilling that promise, from data governance to agentic AI deployment, let us do it for you. We've got more than a decade of real-world AI implementation (AI existed long before ChatGPT) built on your foundational data so you can reap the benefits of AI while your competitors are still figuring out how to prompt.
- Keynote talks and workshops: Bring Trust Insights to your event! We offer customized keynotes and workshops for conferences, company retreats, executive leadership meetings, annual meetings, and roundtables. Every full-fee talk is customized to your event, industry, or company, and you get the talk recording and materials (transcripts, prompts, data) for your audience to work with and learn from.
The Shift from a Feature Factory to a Reasoning Engine
To understand how radically things have changed, we first need to understand the old world of "features." For years, recommendation systems, including LinkedIn's, were built like intricate assembly lines. Your profile, your posts, and your activity were chopped up into thousands of tiny, distinct pieces of data called "features." A feature is a number, a category, a simple signal. For example:
● comment_count_last_24h = 12
● is_in_same_industry = 1 (for yes) or 0 (for no)
● post_contains_keyword_AI = 1
● author_seniority_level = 5
The old system took these thousands of numerical signals and fed them into a series of specialized models (our previous guide called them L0, L1, and L2 rankers). Each model in the pipeline was an expert at one thing: the L0 found thousands of potential posts, the L1 narrowed them down, and the L2 performed the final, sophisticated ranking. It was an industrial marvel of machine learning engineering, a "feature factory" that was incredibly good at optimizing for signals.
But this approach had inherent limitations. It struggled with nuance. It couldn't easily understand a new job title it had never seen before. It required massive teams of engineers to constantly create and maintain these features, a process that created enormous technical debt. Most importantly, it was a system that processed numbers, not ideas.
The new ecosystem throws that entire assembly line away. At its heart now sits a single, massive foundation model. A foundation model is a type of AI, like those powering ChatGPT or Claude, that is pre-trained on a colossal amount of text and data, enabling it to acquire a general understanding of language, concepts, and reasoning. LinkedIn's researchers took a powerful open-source foundation model (specifically, Mistral AI's Mixtral 8x22B model) and then subjected it to an intense, months-long "PhD program" focused exclusively on LinkedIn's professional ecosystem.
They trained it on trillions of tokens of data representing the interactions between members, jobs, posts, skills, and companies.
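To see what the old "feature factory" paradigm means in practice, here is a miniature sketch. The feature names come from the examples above; the weights and the linear scoring function are invented for illustration (a real L2 ranker learned far more complex models, such as gradient-boosted trees or neural networks):

```python
# The old paradigm in miniature: a post is reduced to numeric features,
# and a model combines them into a relevance score.
features = {
    "comment_count_last_24h": 12,
    "is_in_same_industry": 1,
    "post_contains_keyword_AI": 1,
    "author_seniority_level": 5,
}

# Invented weights; a real ranker learned these from billions of interactions.
weights = {
    "comment_count_last_24h": 0.02,
    "is_in_same_industry": 0.5,
    "post_contains_keyword_AI": 0.3,
    "author_seniority_level": 0.05,
}

def score(features, weights):
    """A linear stand-in for the old L2 ranker: no language, no reasoning,
    just arithmetic over pre-computed signals."""
    return sum(weights[name] * value for name, value in features.items())

relevance = score(features, weights)
```

Notice that nothing in this computation ever reads a word of your post; that is precisely the limitation the new system removes.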
The result is 360Brew. It is not a feature factory; it is a reasoning engine. It no longer asks, "What are the numerical features for this post?" Instead, the system itself writes a prompt, in plain English, that looks something like this:
"Here is the complete profile of a Senior Product Manager at Salesforce. Here is their recent activity: they've commented on posts about product strategy and liked articles about market positioning. Here is a new post from a marketing expert about a novel go-to-market framework. Based on everything you know, predict the probability that this Product Manager will comment on this new post."
This is not a signal-processing problem. This is a reading comprehension and reasoning problem. The model reads the text, understands the concepts, contextualizes the member's past behavior, and makes a nuanced prediction. It has moved from arithmetic to analysis.
The New Premise: Your Success is Determined by Your Prose
This fundamental shift changes everything for marketers, creators, and professionals on the platform. The quality of your visibility is no longer determined by how well you can feed signals into a machine, but by how compellingly you can articulate your value in plain language. Your text is the input.
This new reality is built on two core capabilities of modern foundation models:
1. Zero-Shot Reasoning: 360Brew can understand and reason about concepts it hasn't been explicitly trained on. Because it understands language, it can see a new job title like "Chief Metaverse Officer" and infer its likely seniority, relevant skills, and industry context, even if it has never seen that exact title before. It no longer needs a feature called job_title_id = 9875.
2. In-Context Learning (ICL): This is perhaps the most crucial concept for you to understand. The model learns on the fly from the information provided directly in the prompt.
A member's recent activity isn't just aggregated into a historical "embedding" over time; it's presented as a series of fresh examples in the moment. When the model sees that you just liked three posts about artificial intelligence, it learns, for that specific session, that you are highly interested in AI right now. It creates a temporary, personalized model for you, in real-time, based on the context it's given.
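Conceptually, the prompt written for each member/post pair is just assembled text. The template below is our paraphrase of the style of prompt described above, not LinkedIn's actual template; the function name and section headings are invented for illustration:

```python
def build_ranking_prompt(profile, recent_activity, candidate_post):
    """Assemble a plain-language ranking prompt. The member's recent
    interactions are included verbatim as in-context examples, which is
    what lets the model 'learn' the member's current interests on the fly."""
    activity_lines = "\n".join(f"- {event}" for event in recent_activity)
    return (
        f"Member profile:\n{profile}\n\n"
        f"Recent activity:\n{activity_lines}\n\n"
        f"Candidate post:\n{candidate_post}\n\n"
        "Based on everything above, predict the probability that this "
        "member will comment on the candidate post."
    )

prompt = build_ranking_prompt(
    "Senior Product Manager at Salesforce.",
    ["Commented on a post about product strategy",
     "Liked an article about market positioning"],
    "A novel go-to-market framework for B2B SaaS.",
)
```

The practical takeaway: everything in that template, your profile prose, the text of what you engage with, and the text of the candidate post, is raw input the model reads directly.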
With this understanding, we can establish the new guiding principles for success on LinkedIn. Think of your presence on the platform as creating a dossier for this powerful reasoning engine.
● Your Profile is the Dossier's Executive Summary. The AI reads it. Your headline is not just a collection of keywords; it is the title of your professional story. Your About section is no longer optional filler; it is the abstract that provides the essential narrative and context for everything else. Your Experience descriptions are the evidence, the case studies that prove your expertise. The prose you use to describe your accomplishments is a direct, primary input to the ranking and recommendation engine.
● Your Content is the Case Study. Each post you create is a new piece of evidence presented to the engine. It is evaluated not just on its topic, but on its structure, clarity, and the value of its argument. A well-written, insightful post that clearly articulates a unique perspective is now, by its very nature, optimized for the system. The model is designed to identify and reward expertise, and the primary way it understands expertise is by analyzing the language you use to express it.
● Your Engagement is the Live Briefing. Your likes, comments, and shares are no longer just simple positive (+1) or negative (-1) signals. They are the examples you provide for in-context learning. When you leave a thoughtful comment on an expert's post, you are telling the AI, "This is the caliber of conversation I find valuable. This is part of my professional identity." You are actively curating the real-time examples that the model uses to understand you, making your engagement a powerful tool for shaping your own content distribution.
How LinkedIn Works: A Big Picture Look at the Three Pillars
A revolutionary model like 360Brew, with its ability to read, reason, and understand language, doesn't just spring into existence. It is the end product of a colossal industrial and engineering effort. To think that you are simply interacting with "an algorithm" is like looking at a brand-new electric vehicle and calling it "a wheel." You are missing the vast, complex, and deeply interconnected ecosystem that makes it possible.
To truly understand how to work with this new system, you can't just look at the engine; you have to understand the entire assembly line. The modern LinkedIn AI is built on three foundational pillars, a lifecycle that can be understood as: Build, Think, and Run.
1. The Factory (OpenConnect): The sophisticated, high-speed manufacturing plant where the AI models are built, trained, and continuously improved.
2. The Engine (360Brew): The powerful, centralized reasoning engine that, once built, does the "thinking"—analyzing your profile, content, and interactions to make decisions.
3. The Operating System (vLLM): The high-performance infrastructure that "runs" the engine at a global scale, serving real-time predictions to over a billion members.
Understanding these three pillars will give you a complete picture of the forces shaping your visibility on the platform. It will move you from guessing at tactics to developing a durable strategy based on the fundamental principles of the entire ecosystem.
Pillar 1: Training (OpenConnect) - The Factory That Builds the Engine
Before a single recommendation can be made, the AI model itself must be built. This is a process of immense scale, involving the ingestion and processing of petabytes of data—the digital equivalent of all the books in the Library of Congress, multiplied many times over. The "factory" where this happens is LinkedIn's next-generation AI pipeline platform, called OpenConnect.
To appreciate how significant this is, you have to understand the old factory it replaced. For years, LinkedIn's AI pipelines ran on a legacy system called ProML. While powerful for its time, it became the digital equivalent of a turn-of-the-century assembly line: slow, rigid,
and prone to bottlenecks. LinkedIn's own engineers reported that making a tiny change to a model—tweaking a single parameter—could require a full 15-minute rebuild of the entire workflow. Imagine if a car factory had to shut down and retool the entire assembly line just to change the color of the paint. The pace of innovation was throttled by the very tools meant to enable it.
OpenConnect is the gleaming, modern gigafactory that replaced it. It was built on principles of speed, reusability, and resilience, and its design directly impacts how quickly the platform's AI can evolve. For you as a marketer or creator, a smarter factory means a smarter AI that learns and adapts faster. Here's how it works:
Core Principle 1: Standardized, Reusable Parts (The Component Hub)
A modern factory doesn't build every single screw and bolt from scratch for every car. It uses standardized, high-quality components from trusted suppliers—a Bosch fuel injector, a Brembo brake system. OpenConnect does the same for AI. LinkedIn's platform and vertical AI teams have built a comprehensive library of reusable, pre-approved "components." These are standardized pieces of code for common tasks: a component for collecting data, a component for analyzing a model's performance, a component for processing text. Teams can then assemble these trusted components to build their unique AI pipelines.
● What this means for you: This ensures a level of quality and consistency across the entire platform. The AI component that understands your job title in the context of the feed is built from the same foundational block as the one that matches your profile to a job description. This shared understanding allows the AI to make more coherent connections about you across different parts of the platform. When one component is improved—for example, a better way to understand skills from text—that improvement can ripple across the ecosystem, making the entire system smarter at once.
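The component-hub idea is easiest to see in code. The sketch below is a generic registry pattern in plain Python standing in for the concept; OpenConnect itself is built on Flyte, and all names here are invented for illustration:

```python
COMPONENT_HUB = {}  # shared registry: component name -> reusable function

def component(name):
    """Decorator that registers a function as a reusable pipeline component."""
    def register(fn):
        COMPONENT_HUB[name] = fn
        return fn
    return register

@component("collect_data")
def collect_data(source):
    # Stand-in for a pre-approved data-collection component.
    return [f"record from {source}"]

@component("process_text")
def process_text(records):
    # Stand-in for a pre-approved text-processing component.
    return [r.upper() for r in records]

def run_pipeline(steps, source):
    """Assemble a pipeline from components pulled off the shelf,
    instead of writing bespoke code for each stage."""
    data = COMPONENT_HUB[steps[0]](source)
    for step in steps[1:]:
        data = COMPONENT_HUB[step](data)
    return data

result = run_pipeline(["collect_data", "process_text"], "feed_events")
```

The design payoff is the one described above: improve `process_text` once, and every pipeline that pulls it from the hub inherits the improvement.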
Core Principle 2: The Need for Speed (Decoupling and Caching)

The biggest problem with the old factory was its speed. Everything was tangled together; a change in one area required rebuilding everything. OpenConnect solved this with two key innovations that are simple to understand.

● Decoupling: The new system isolates every component. Changing one part of the pipeline no longer requires rebuilding the whole thing. It's like a modern pit crew: they can change a tire without having to touch the engine.

● Caching: The system intelligently saves the results of previous work. Once a component has been built, it's stored in a ready-to-use state (in a Docker image or a manifest file). When an engineer wants to run an experiment, the system doesn't rebuild everything from scratch; it pulls the pre-built components off the shelf and runs them immediately. LinkedIn reports that this new architecture reduced workflow launch times from over 14 minutes to under 30 seconds.

● What this means for you: This dramatic increase in speed for LinkedIn's engineers translates into a faster pace of innovation for you. An engineer who can run 20 experiments a day instead of one can test more ideas, find what works, and improve the AI much faster. This is why the feed, job recommendations, and other AI-driven features seem to be evolving at a breakneck pace. The factory is running at full speed, constantly running A/B tests and shipping improvements. The "algorithm" is no longer a static target; it's a rapidly evolving system, and this factory is the reason why.

Core Principle 3: Unwavering Reliability (Disruption Readiness)

An AI factory that processes petabytes of data is under immense strain. Servers need maintenance, networks can have hiccups, and hardware can fail. In the old world, a disruption like a server reboot could kill a multi-day training job, forcing engineers to start over from scratch, wasting days of work and computational resources. OpenConnect was designed for resilience. It uses a system of active checkpointing, which is a fancy way of saying it is constantly saving its work. During a long training process, the system automatically saves the model's parameters, its progress, and exactly where it was in the dataset. If a disruption occurs, Flyte (the open-source workflow engine at the heart of OpenConnect) simply restarts the job on a new set of servers, and it picks up from the last checkpoint. LinkedIn states this has reduced training failures due to infrastructure disruptions by 90%.

● What this means for you: The platform's core AI is more robust and reliable than ever. This stability allows LinkedIn to train even larger, more complex models with confidence, knowing that the process won't be derailed. This industrial-grade reliability is a prerequisite for building a foundation model as massive and critical as 360Brew.
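The checkpoint-and-resume pattern described above can be sketched as follows. This toy `train` function is purely illustrative; real checkpointing saves model weights, optimizer state, and data position through Flyte and the training framework, not a Python dictionary.

```python
# Conceptual sketch of active checkpointing: a long-running job periodically
# saves its state so a replacement worker can resume after a disruption
# instead of restarting from scratch. (Illustrative toy code only.)

def train(dataset, checkpoint=None, fail_at=None):
    """Process the dataset, saving progress after every item.
    `fail_at` simulates an infrastructure disruption at that index."""
    state = checkpoint or {"position": 0, "work_done": []}
    for i in range(state["position"], len(dataset)):
        if fail_at is not None and i == fail_at:
            return state, False          # disrupted: hand back last checkpoint
        state["work_done"].append(dataset[i] * 2)
        state["position"] = i + 1        # progress recorded in the checkpoint
    return state, True

data = [1, 2, 3, 4, 5]

# First worker is disrupted partway through the job.
ckpt, finished = train(data, fail_at=3)
assert not finished and ckpt["position"] == 3

# A replacement worker resumes from the checkpoint, not from zero.
ckpt, finished = train(data, checkpoint=ckpt)
print(finished, ckpt["work_done"])  # prints: True [2, 4, 6, 8, 10]
```

The key property is that the second call never redoes the first three items of work, which is exactly why a server reboot no longer wastes days of computation.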
Pillar 2: Reasoning (360Brew) - The Powerful New Engine

After the OpenConnect factory has processed the data and assembled the training pipeline, it's time to build the engine itself: 360Brew. As we discussed, this is the centralized brain of the new ecosystem. It is a massive, 150B+ parameter foundation model that has been meticulously fine-tuned to become the world's leading expert on the professional economy as represented by LinkedIn's data. To understand how this engine "thinks," we need to look beyond its size and explore its architecture, its education, and its inherent limitations.

Inside the Engine: A Boardroom of Experts (Mixture-of-Experts)

360Brew is based on the Mixtral 8x22B architecture, which uses a clever design called Mixture-of-Experts (MoE). This is a critical detail that makes a model of this scale feasible. Imagine a traditional AI model as a single, giant brain. To answer any question, the entire brain has to light up and work on the problem, which is computationally expensive. An MoE model, however, works more like a boardroom of specialized consultants. The Mixtral 8x22B model, for example, can be thought of as having 8 distinct "expert" networks. When a piece of text comes in (a "token"), the model's internal routing system intelligently selects the two most relevant experts to handle it. If the token is part of a marketing post, it might route it to the experts in business language and strategy. The other six experts remain dormant, saving compute. This design allows the model to have a colossal number of total parameters (giving it vast knowledge) while only using a fraction of them for any given task, making it far more efficient to train and run than a traditional model of equivalent size.

● What this means for you: The engine ranking your content is not a monolithic generalist; it's a team of specialists. This architecture allows for a much deeper and more nuanced understanding of different professional domains. It can apply its "engineering expert" to understand a technical post and its "sales expert" to understand a post about enterprise selling, leading to more accurate and context-aware evaluations.

The Curriculum: A PhD in the Professional World
A base foundation model like Mixtral starts with a generalist "bachelor's degree" from reading the public internet. To turn it into 360Brew, LinkedIn put it through an intensive post-graduate program. This training happens in several key stages:

1. Continuous Pre-training (The Daily Reading): This is where the model ingests trillions of tokens of LinkedIn's proprietary data. It reads member profiles, job descriptions, posts, articles, and the connections between them. This is the phase where it moves beyond general internet knowledge and learns the specific language, entities, and relationships of the professional world. It learns that "PMP" is a certification, that "SaaS" is an industry, and that a "VP of Engineering" is more senior than a "Software Engineer."

2. Instruction Fine-Tuning (The Classroom Learning): After it has the knowledge, the model is taught how to use it. In this stage, it's trained on a dataset of questions and answers relevant to LinkedIn's tasks. It learns to follow instructions, like the ones in the prompts we've discussed. This is where it learns to take a request like "Predict whether this member will like this post" and understand the desired output format and reasoning process.

3. Supervised Fine-Tuning (The On-the-Job Training): Finally, the model is trained on millions of real-world examples with known outcomes. It's shown a member's profile, their history, a post, and then the correct answer: "this member did, in fact, comment." By training on countless examples, it refines its ability to predict future actions with high accuracy.

● What this means for you: The 360Brew engine has an incredibly deep and nuanced understanding of the professional world. Its knowledge is not superficial; it has been trained to understand the intricate web of relationships between skills, job titles, companies, and content that defines your professional identity. It understands that a comment from a recognized expert in your field is more significant than a like from a random connection.

The Engine's Blind Spot: The "Lost-in-Distance" Problem

No system is perfect, and this one has a fascinating and critical limitation that stems directly from its architecture. Researchers at LinkedIn (and elsewhere) discovered that while foundation models are great at handling long contexts, their performance suffers from a phenomenon they've termed "Lost-in-Distance."

Think of it like reading a long and complex report. You are very likely to remember the main point from the introduction (the beginning of the context) and the final conclusion (the end of the context). However, if the report requires you to connect a subtle detail on page 32 with another crucial detail on page 157, you might miss the connection. The two pieces of information are simply too far apart for you to easily reason across them. Foundation models have the same problem. When given a long prompt containing a member's interaction history, the model is very good at using information at the beginning and end of that history. But if two critical, related data points are separated by thousands of other tokens in the middle of the prompt, the model may struggle to connect them, and its predictive accuracy can drop significantly.

● What this means for you: This is one of the most important technical limitations for a marketer to understand. It means that the structure and curation of the information presented to the AI are just as important as the information itself. When LinkedIn builds a prompt for 360Brew, its engineers cannot simply dump your entire chronological history into it. They must use intelligent strategies—like prioritizing your most recent interactions, or finding past interactions that are most similar to the content being ranked—and place that crucial information where the model is most likely to "see" it. This is why a clear, concise profile and well-structured content are so valuable: they make your key information easier for the model to find and reason with, overcoming its inherent blind spot.

Pillar 3: Serving (vLLM) - The System That Runs the Engine at Scale

Once the OpenConnect factory has built the 360Brew engine, there's one final, monumental challenge: how do you actually run it? A 150B+ parameter model is an incredible piece of technology, but it's also a computational behemoth. You can't just install it on a standard server. Running it for a single user is demanding; running it for hundreds of millions of active users in real time is an engineering problem of the highest order. This is where the third pillar comes in: the serving infrastructure.
If 360Brew is the Ferrari engine, the serving layer is the custom-built chassis, transmission, and fuel-injection system required to get that power to the road without it tearing itself apart. For this, LinkedIn has turned to a popular, high-performance open-source framework called vLLM.

For a marketer, the deep technical details of vLLM (like its PagedAttention memory management) are not important. What is important are the implications of LinkedIn's choice to build its serving layer on top of a rapidly evolving, globally developed open-source project.

The Implication: A System in Constant Flux

By adopting vLLM, LinkedIn's AI serving stack is not a static, internally developed piece of software that gets updated once or twice a year. It is a living, breathing system that benefits from the collective R&D of the entire global AI community. LinkedIn's own engineers have documented their journey through multiple versions of vLLM in just a matter of months, with each new version bringing significant performance improvements. They are not just consumers of this technology; they are active contributors, submitting their own performance optimizations back to the open-source project for everyone to use.

● What this means for you: This is the ultimate reason why chasing short-term "hacks" is a fool's errand. The very foundation on which the ranking engine runs is changing and improving on an ongoing basis. A loophole or quirk you might discover in the system's behavior at breakfast could be completely gone by lunch, not because a LinkedIn product manager decided to change it, but because the underlying vLLM engine received an update that changed how it schedules and processes requests on the GPU. This constant state of flux and improvement makes it impossible to "game" the system for long. The only durable strategy is to align with the core principles of the ecosystem: providing the highest-quality textual information via your profile, creating valuable and well-reasoned content, and engaging in a way that builds a strong, coherent professional identity. These are the inputs that will always be valued, regardless of how the underlying technology evolves.
How LinkedIn's Algorithms Actually Work

Now that we understand the three pillars of the modern LinkedIn AI—the factory (OpenConnect), the engine (360Brew), and the operating system (vLLM)—we can explore how they work together to deliver the content you see in your feed.

The old concept of a rigid, multi-stage funnel (L0, L1, L2) has been fundamentally transformed. The assembly line of specialized models has been replaced by a more dynamic and intelligent workflow centered around the powerful 360Brew reasoning engine. However, the core challenge remains the same: scale. Every day, billions of potential posts, updates, and articles are created across the platform. It is computationally impossible, even for an engine as efficient as 360Brew, to read and evaluate every single one of them for every member. The system still needs a way to intelligently narrow this vast ocean of content down to a small, manageable stream of the most promising items.

Therefore, the funnel still exists, but it has been reimagined. It's no longer a series of ever-smarter filters; it's a two-stage process. First, an incredibly broad but efficient "scouting" system finds a few thousand promising candidates. Then, the expert "general manager"—360Brew—is brought in to do the deep, nuanced final evaluation.

This section will walk you through this reimagined funnel step by step. For each stage, we will explain:

● What happens: A description of the technical process taking place.
● So what?: The direct implications for you as a marketer, creator, or professional.
● Now what?: Action-oriented guidance on how you can align your strategy with this stage of the process.

Step 1: Candidate Generation (The Initial Longlist)

This is the very top of the funnel, where the magic of personalization begins. The system's first task is to sift through the billions of potential posts in the LinkedIn universe and select a few thousand that might be relevant to you. This is a game of recall, not precision. The goal is not to find the single best post, but to ensure that the best post is somewhere in the initial pool of candidates. If your content doesn't make it into this initial "longlist," it has zero chance of being seen, no matter how good it is.
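A recall-first retrieval pass can be sketched with toy embeddings and plain cosine similarity. The vectors, post names, and member embedding below are invented for illustration; production systems use approximate nearest-neighbor indexes (such as FAISS or ScaNN) over millions of items rather than a brute-force loop.

```python
# Toy sketch of recall-oriented candidate generation: score every post
# against a member embedding and keep a broad top-k pool. All vectors
# and names are made-up illustrations.
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

member = [0.9, 0.1, 0.2]  # hypothetical "B2B marketing" member embedding

posts = {
    "saas_gtm_strategy":  [0.8, 0.2, 0.1],
    "python_memory_tips": [0.1, 0.9, 0.3],
    "brand_measurement":  [0.7, 0.1, 0.4],
    "cooking_hobby_post": [0.0, 0.2, 0.9],
}

def candidate_longlist(member_vec, post_vecs, k):
    """Recall, not precision: the only goal is that good posts make the pool."""
    ranked = sorted(post_vecs,
                    key=lambda p: cosine(member_vec, post_vecs[p]),
                    reverse=True)
    return ranked[:k]

print(candidate_longlist(member, posts, k=2))
# prints ['saas_gtm_strategy', 'brand_measurement']
```

Notice that the off-topic cooking post scores lowest: a post with a sharp, distinct topical embedding is far easier for this kind of search to match to the right member.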
To accomplish this at incredible speed, the system uses several efficient methods running in parallel. These methods are designed to be fast and broad, casting a wide net to pull in a diverse set of potential content. The two primary methods are Cross-Domain Graph Neural Networks and Heuristic-Based Retrieval.

Cross-Domain Graph Neural Networks (GNNs): The Holistic Scout

● What happens: LinkedIn maintains a colossal, constantly updated map of its entire professional ecosystem, known as the Economic Graph. This isn't just a list of members and companies; it's a complex web of interconnected nodes and edges representing every entity and every interaction: you (a node) are connected to your company (a node) with a "works at" edge; you are connected to another member with a "connection" edge; you are connected to a post with a "liked" edge. A Graph Neural Network (GNN) is a specialized type of AI designed to learn from this very structure. It can "walk" the graph, learning patterns from the relationships between nodes. The most significant evolution here is that LinkedIn's GNN is now cross-domain. Previously, a GNN might have been trained just on Feed data to recommend Feed content. The new Cross-Domain GNN is holistic. It ingests and learns from your activity across the entire platform. It sees the jobs you click on, the notifications you open, the influencers you follow in your email digests, the skills you endorse, and the articles you share. It then uses this complete, 360-degree view of your professional interests to find potential content. For example, if you've recently started clicking on job postings for "Product Marketing Manager," the GNN learns that you are interested in this topic. It can then walk the graph to find high-quality posts, articles, and discussions about product marketing, even if you've never explicitly engaged with that topic in the feed before. It uses your behavior in one domain (Jobs) to inform its recommendations in another (the Feed).

● So what?: This means your professional identity on LinkedIn is no longer siloed. The system is building a single, unified understanding of you based on the totality of your actions. Every click, every follow, every job application is a piece of data that refines your "member embedding"—your unique digital fingerprint on the Economic Graph. The system is constantly trying to answer the question, "Based on everything this member does on our platform, what are they truly interested in professionally?" Your content gets pulled into the longlist if its own "graph neighborhood"—the topics, skills, and people it's connected to—strongly overlaps with a member's holistic interests.

● Now what?: Your goal is to create a clear, consistent, and coherent professional identity across the entire platform, not just in your posts.

○ Build a Relevant Network: Your connections are a primary signal. Connect with professionals in your target industry and with individuals who engage with the kind of content you create. When your connections engage with your content, it signals to the GNN that your post is relevant to that specific "graph neighborhood," increasing its chances of being shown to their connections (your 2nd- and 3rd-degree network).

○ Maintain Your Profile as Your Professional Hub: The skills listed on your profile, the job titles you've held, and the companies you've worked for are powerful, stable nodes in the graph. The GNN uses this information as an anchor for your identity. If your profile clearly states your expertise in "B2B SaaS Marketing," the GNN is far more likely to identify your content on that topic as relevant.

○ Engage Authentically Beyond the Feed: Your activity is not just about feed engagement. Clicking on a job ad, following a company, or even watching a LinkedIn Learning video are all signals the Cross-Domain GNN uses. Engage with the platform in a way that authentically reflects your professional interests and goals. This holistic activity provides the rich data the GNN needs to understand who you are and, by extension, who your content is for.

Heuristics & Similarity Search: The Fast Scouts

While the GNN is incredibly powerful for understanding deep relational patterns, it's also computationally intensive. To supplement it, the system uses several faster methods to ensure the longlist is filled with timely, fresh, and obviously relevant content.

● What happens: This stage uses a combination of simpler, rule-based methods (heuristics) and efficient search techniques to quickly find candidates.
○ Heuristic-Based Retrieval: These are simple, common-sense rules that can be executed at massive scale with very low latency. Examples include:

■ Timeliness: Show very recent posts from a member's direct connections.
■ Recent Interaction: If a member just commented on one of your posts, the system is more likely to pull your next post into their longlist.
■ Velocity: Posts that are gaining unusually high engagement (likes, comments) very quickly are flagged and pulled into more longlists to see if they have viral potential.

○ Similarity Search (Embedding-Based Retrieval): This is a more sophisticated but still incredibly fast method. As we've discussed, every piece of content and every member has an "embedding," or digital fingerprint. The system can take your member embedding and, in a fraction of a second, search a massive database (using specialized technology like FAISS or ScaNN) for posts with the most similar embeddings. "Similar" can mean many things: similar topics, similar style, or content liked by members with similar profiles to yours. This allows the system to find topically relevant content even if it's from creators you're not connected to.

● So what?: This means that speed and topical clarity are crucial factors for getting into the initial longlist. While the GNN looks at your deep, long-term identity, these faster methods are focused on the here and now. A well-timed post on a trending topic, or one that gets a quick burst of initial engagement, can leverage these heuristics to get a significant initial boost in visibility. Similarly, content that has a very clear and distinct topical focus will have a "sharper" embedding, making it easier for the similarity search to find and match it with the right audience.

● Now what?: Your strategy here should focus on creating clear, timely content and fostering immediate engagement.

○ Be Clear and Specific: When you write a post, have a single, clear topic in mind. A post about "The Impact of AI on B2B SaaS Go-to-Market Strategy" will generate a much more distinct and matchable embedding than a vague post about "The Future of Business." Avoid muddled or overly broad topics in a single post if you want to be found via similarity search.

○ Encourage Quick Engagement: The first hour after you post is still critically important. This is the window where you can trigger the velocity-based heuristics. Encourage discussion by asking questions, and be present to reply to comments immediately. This initial flurry of activity is a powerful signal that your content is resonating, which can significantly increase its initial reach.
○ Engage Authentically with Others: The recent-interaction heuristic is a two-way street. When you thoughtfully engage with content from others in your target audience, you increase the probability that the system will show your next post to them. Authentic engagement is not just about building relationships; it's a direct technical signal to the candidate generation system.

○ Tap into Trending Topics (When Relevant): If there is a significant conversation happening in your industry, creating a timely and insightful post on that topic can leverage the system's ability to identify and boost trending content. Don't force it, but when a topic aligns with your expertise, timeliness can be a powerful amplifier.

By understanding this first crucial step, you can see that getting visibility is not about a single magic bullet. It's about building a strong, coherent professional identity (for the GNN) while also creating clear, timely, and engaging content (for the faster retrieval methods). If you can successfully align your efforts with both of these systems, you will maximize your chances of getting your content into the initial longlist—the gateway to the powerful reasoning of the 360Brew engine.

Step 2: The 360Brew Ranking & Reasoning Engine (The New L2 Ranker)

Once the wide net of Candidate Generation has pulled in a few thousand promising posts, the system moves to the second and most critical stage of the reimagined funnel. This is where the brute force of recall gives way to the surgical precision of relevance. The thousands of candidates are now handed over to the new centerpiece of LinkedIn's AI ecosystem: the 360Brew Ranking & Reasoning Engine.

If the previous step was about scouting for potential, this step is the final, in-depth interview. Each candidate post is rigorously evaluated against your specific profile and recent behavior to answer one fundamental question: "Out of all these options, which handful of items are the most valuable to this specific member, right now?"

This is the new "L2 Ranker," and its arrival marks the single greatest shift in how LinkedIn's feed works. The old L2 ranker, a model named LiGR, was a marvel of traditional machine learning—a highly specialized Transformer model trained to find complex patterns in numerical features. 360Brew is a different species entirely. It does not think in numbers and features; it thinks in language and concepts.
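The economics of this two-stage handoff can be sketched in a few lines. The scoring functions and item counts below are invented stand-ins, but they show the essential design choice: the cheap scout touches everything, while the expensive expert ranker only ever reads the small candidate pool.

```python
# Sketch of the two-stage funnel: cheap, broad scouting (recall) followed
# by expensive, precise ranking of the shortlist only. All scores and
# feature names are illustrative.

expensive_calls = 0

def cheap_scout_score(post):
    """Fast heuristic stand-in for candidate generation."""
    return post["recency"] + post["connection_overlap"]

def expert_rank_score(post):
    """Stand-in for a costly 360Brew-style evaluation of one candidate."""
    global expensive_calls
    expensive_calls += 1
    return post["relevance"]

inventory = [
    {"id": i, "recency": i % 7, "connection_overlap": i % 3, "relevance": i % 11}
    for i in range(1000)  # pretend this is the billions-scale inventory
]

# Stage 1: broad, cheap longlist (recall).
longlist = sorted(inventory, key=cheap_scout_score, reverse=True)[:50]

# Stage 2: deep, expensive evaluation of the longlist only (precision).
feed = sorted(longlist, key=expert_rank_score, reverse=True)[:5]

print(expensive_calls)  # prints 50: the expert never reads the other 950
```

This is why making the longlist matters so much: content that fails the cheap first pass never reaches the expert at all, regardless of how well it would have scored.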
What it is: An Expert on the Professional World

● What happens: 360Brew is a massive foundation model with over 150 billion parameters, built upon the powerful open-source Mixtral 8x22B Mixture-of-Experts (MoE) architecture. As we discussed in Part 1, this means it's not just one giant AI brain but a "boardroom" of specialized expert networks, making it incredibly knowledgeable and efficient. But its power doesn't come from its base architecture alone. Its true expertise comes from its education. After starting with a general understanding of the world from reading the public internet, LinkedIn put it through an intensive, multi-stage fine-tuning process fed by trillions of tokens of proprietary data. This process imbued 360Brew with a deep, nuanced understanding of the professional world that no generic AI could ever possess. It learned the intricate relationships between job titles, skills, industries, companies, and seniority levels. It learned the subtle differences in language between a software engineer and a product manager, or between a sales leader and a marketing executive. It learned to identify credible expertise, to understand conversational dynamics, and to recognize the markers of a valuable professional discussion. It is, for all intents and purposes, the world's foremost expert on the LinkedIn Economic Graph, capable of reading and interpreting its data with near-human comprehension.

● So what?: The engine evaluating your content is no longer a pattern-matching machine looking for statistical correlations. It is an expert reader with deep domain knowledge. This has profound implications. A system that understands concepts can see beyond superficial keywords. It can understand that a post about "reducing customer acquisition cost" is highly relevant to a VP of Sales, even if the post never uses the word "sales." It can recognize that a detailed, well-structured argument from a known expert is more valuable than a shallow, clickbait post, even if the latter uses more trendy hashtags. The bar for quality has been raised. Your content is no longer being judged by a machine on its signals, but by an expert on its substance.

● Now what?: You must shift your mindset from creating "content that the algorithm will like" to creating "content that an expert in your field would find valuable."

○ Write for an Intelligent Audience: Assume the reader (the AI) is the smartest person in your industry. Avoid jargon for jargon's sake, but don't shy away from using precise, professional language. Explain complex topics with clarity and depth. The model is trained to recognize and reward genuine expertise, which is demonstrated through the quality and coherence of your writing.

○ Demonstrate, Don't Just Declare: Don't just put "Marketing Expert" in your headline. Demonstrate that expertise in your content. Share unique insights, provide a contrarian (but well-reasoned) take on a popular topic, or create a detailed framework that helps others solve a problem. 360Brew is designed to evaluate the substance of your contribution, not just the labels you attach to it.

○ Focus on Your Niche: An expert model respects niche expertise. A deep, insightful post for a specific audience (e.g., "A Guide to ASC 606 Revenue Recognition for SaaS CFOs") is now more likely to be identified and shown to that exact audience than ever before. The model's deep domain knowledge allows it to perform this highly specific matchmaking with incredible accuracy. Don't be afraid to go deep; the system can now follow you there.

How it Works: The Power of the Prompt

● What happens: The most revolutionary aspect of 360Brew is how it receives information. It does not ingest a long list of numerical features. Instead, for each of the thousands of candidate posts, the system constructs a detailed prompt in natural language. This prompt is a dynamically generated briefing document, a bespoke dossier created for the sole purpose of evaluating a single post for a single member. Based on the research papers, this prompt is assembled from several key textual components:

○ The Instruction: A clear directive given to the model, such as, "You are provided a member's profile, their recent activity, and a new post. Your task is to predict whether the member will like, comment on, or share this post."

○ The Member Profile: The relevant parts of your profile, rendered as text. This includes your headline, your current and past roles, and likely key aspects of your About section.

○ The Past Interaction Data: A curated list of your most recent and relevant interactions on the platform, also rendered as text. For example: "Member has commented on the following posts: [Post content by Author A]... Member has liked the following posts: [Post content by Author B]..."
○ The Question: The candidate post itself, including its text, author, and topic information.

○ The Answer: The model's task is to generate the answer, predicting your likely action.

The engine reads this entire document, from top to bottom, for every single candidate post it evaluates for you.

● So what?: This means that every ranking decision is a fresh, context-rich evaluation. The system is not just matching your static profile against a static post. It is performing a holistic analysis that takes into account your identity, your immediate interests, and the content of the post in a single, unified comprehension task. This is why the quality of the text on your profile and in your content has become the most critical factor for success. Poorly written, unclear, or keyword-stuffed text creates a muddled, low-quality prompt, which will lead to a poor evaluation. Clear, compelling, and well-structured text creates a high-quality prompt that allows the engine to make a more accurate and favorable assessment.

● Now what?: You must treat every piece of text you create on LinkedIn as a direct input for these prompts. Your goal is to make the dossier about you as clear, compelling, and impressive as possible.

○ Optimize Your Profile's Narrative: Go back and read your headline, About section, and experience descriptions out loud. Do they tell a coherent and compelling story of your professional value? An AI that reads for a living can tell the difference between a thoughtfully crafted narrative and a jumbled list of buzzwords.

○ Craft Your Posts for Readability: Structure your posts for clarity. Use short paragraphs, bullet points, and bolding to make your key points easy to parse. A well-structured post is not just easier for humans to read; it's easier for a language model to comprehend and evaluate.

○ Be Deliberate with Your Language: The words you choose matter more than ever. The engine understands semantic nuance. Writing with precision and authority will be interpreted as a signal of expertise. This doesn't mean using overly complex vocabulary; it means using the right vocabulary for your domain clearly and effectively.
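Assembling such a dossier can be sketched as simple string construction. The template wording, field names, and the `curate_history` helper below are hypothetical illustrations of the described approach, including the idea of curating history (most similar plus most recent interactions) rather than dumping it wholesale; this is not LinkedIn's actual prompt format.

```python
# Sketch of building the natural-language "dossier": instruction + profile
# + curated interaction history + candidate post. All wording is invented.

def curate_history(interactions, candidate_topic, max_items=3):
    """Keep interactions most similar to the candidate post, plus the most
    recent action, instead of dumping the full chronological history into
    the middle of a long prompt (a "Lost-in-Distance" mitigation)."""
    similar = [i for i in interactions if i["topic"] == candidate_topic]
    most_recent = interactions[-1]
    keep = similar[:max_items]
    if most_recent not in keep and len(keep) < max_items:
        keep.append(most_recent)
    return keep

def build_prompt(profile, interactions, candidate):
    history = curate_history(interactions, candidate["topic"])
    lines = [
        "Instruction: You are provided a member's profile, their recent",
        "activity, and a new post. Predict whether the member will like,",
        "comment on, or share this post.",
        f"Member profile: {profile}",
        "Past interactions:",
        *[f"- {h['action']} a post about {h['topic']}" for h in history],
        f"Candidate post by {candidate['author']}: {candidate['text']}",
        "Answer:",
    ]
    return "\n".join(lines)

profile = "Headline: B2B SaaS marketing leader. 10 years in demand gen."
interactions = [
    {"action": "liked", "topic": "product-led growth"},
    {"action": "commented on", "topic": "hiring"},
    {"action": "shared", "topic": "product-led growth"},
]
candidate = {"author": "Author C", "topic": "product-led growth",
             "text": "How PLG changes the B2B marketing funnel."}

print(build_prompt(profile, interactions, candidate))
```

Note how the unrelated "hiring" interaction is dropped from this particular dossier: the same member history yields a different briefing for every candidate post.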
    The Core Mechanism:In-Context Learning & The Illusion of Live Learning ●​ What happens:​ This is where we must address the most profound and often misunderstood aspect of this new ecosystem. How does the system adapt to your behavior so quickly? The answer is In-Context Learning (ICL), and it works very differently from how you might think.​ The old system achieved "freshness" by constantly updating data points. Specialized systems would run every few minutes or hours to recalculate features like "likes on this post in the last hour." The core ranking model itself was retrained less frequently, perhaps daily. The model’s knowledge was relatively static, but the data fed to it was always fresh.​ The new system inverts this entirely. The core 360Brew model is now the most static part of the equation. Training a 150B+ parameter model is a monumental task, taking weeks or months. The model’s internal knowledge—its frozen weights and parameters—is not updated in real-time. It is not "learning" from your clicks in the sense that a student learns and permanently updates their knowledge.​ Instead, the system's dynamism comes from the prompt itself. The prompt is assembled from scratch, in real-time, for every single ranking calculation. The crucial "Past Interaction Data" section is a live query to a database of your most recent actions. This is how the system adapts.​ Think of 360Brew as a world-class consultant with a fixed, encyclopedic knowledge base. ○​ The Old Way: You would give the consultant a spreadsheet of data (the features) that you updated every hour. The consultant's advice would be fresh because the data was fresh. ○​ The New Way: You give the consultant the same, static encyclopedia of knowledge (the frozen model). But for every single question, you also hand them a one-page, up-to-the-second briefing document (the dynamic prompt) that says, "Here’s what’s happened in the last five minutes." 
● The consultant's fundamental knowledge doesn't change, but by conditioning their expertise on the immediate context of the briefing document, their answer is perfectly tailored to the present moment. This is In-Context Learning. The model "learns" temporarily, for the duration of a single thought process, from the examples you provide in the prompt.
● So what?: This has two massive consequences. First, the LinkedIn feed is now hyper-responsive. Your immediate interests, as demonstrated by your very last few actions, can have a significant impact on what you see next. The system is always
trying to model your current "session intent." If you spend five minutes engaging with content about product-led growth, the system will instantly prioritize more of that content for you because your actions have rewritten the "Past Interaction Data" for the next prompt.
Second, your engagement is far more than just a simple signal of approval. Each like, comment, or share is an active contribution to the live briefing document that defines you in that moment. You are not just providing a historical data point for a model to be trained on next week; you are actively feeding the model examples of what you find valuable right now, and it is using those examples to reason about the very next piece of content it shows you. You are, in a very real sense, a co-creator of your own feed's logic.
● Now what?: Your engagement strategy must become as deliberate and strategic as your content strategy. You are constantly providing the live context that steers the AI.
○ Engage with Aspiration: This is the most powerful tactic in the new ecosystem. Actively seek out and engage with content from the experts, companies, and communities you want to be associated with. When you leave a thoughtful comment on a post by a leader in your field, you are providing a powerful, in-context example to the AI: "This is the conversation I belong in. Consider me in this context." This action directly influences how the model perceives and ranks content for you and, by extension, how it ranks your content for others in that same context.
○ Curate Your Context: Your feed is a reflection of the examples you provide. Don't be afraid to use the "I don't want to see this" option or to unfollow connections whose content is not relevant to you. Muting or hiding content is a powerful signal that helps clean up your "Past Interaction Data." This ensures the examples the system learns from are of high quality, leading to a more refined and relevant feed over time.
A noisy, unfocused history will lead to noisy, unfocused recommendations.
○ Warm Up the Engine: Before you post an important piece of content, take 10-15 minutes to "warm up" the system. Engage with several high-quality posts on the same topic as the one you are about to publish. This pre-loads your "Past Interaction Data" with highly relevant, recent examples. It attunes the in-context learning mechanism to your immediate area of focus, effectively telling the system, "Pay attention to this topic right now." This can provide a meaningful edge in the crucial first hour of your post's life, ensuring it's evaluated in the most favorable context possible for your network.
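The mechanics described above, a frozen model conditioned on a freshly assembled prompt, can be sketched in a few lines. This is a minimal illustration with hypothetical field names; it is not LinkedIn's actual schema or code, only a way to see how two requests seconds apart can yield different prompts from the same static model.

```python
# Sketch of in-context learning at serving time: the model never changes;
# only the prompt does. Field names ("action", "content") are hypothetical.

def build_prompt(profile: str, recent_actions: list[dict], candidate: str) -> str:
    """Assemble a fresh briefing document for one ranking decision."""
    # The "Past Interaction Data" section is a live query result: it reflects
    # whatever the member did moments ago, with no model retraining involved.
    history_lines = [f"- {a['action']}: {a['content']}" for a in recent_actions]
    return (
        "Member Profile:\n" + profile + "\n\n"
        "Past Interaction Data:\n" + "\n".join(history_lines) + "\n\n"
        "Question: will the member like, comment, share, or dismiss:\n"
        + candidate
    )

# Two requests seconds apart produce different prompts, because the member's
# latest action is already in the history the next time the prompt is built.
actions = [{"action": "liked", "content": "Post about product-led growth"}]
prompt_1 = build_prompt("Growth marketer at a SaaS firm", actions, "A PLG case study")

actions.append({"action": "commented", "content": "Post about PLG onboarding metrics"})
prompt_2 = build_prompt("Growth marketer at a SaaS firm", actions, "A PLG case study")
```

The point of the sketch: nothing about the "model" changed between the two calls; only the briefing did.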
By understanding the 360Brew engine—what it is, how it works through dynamic prompting, and its core mechanism of in-context learning—you can finally move beyond the world of algorithmic hacks. You can stop asking "How do I please the machine?" and start asking "How do I best communicate my value?" In this new ecosystem, the answer to both questions is finally, and powerfully, the same.
Step 3: The Art of the Prompt - The New "Secret Sauce"
We've established that the modern LinkedIn AI runs on prompts. For every ranking decision, the system assembles a unique, detailed briefing document for its 360Brew reasoning engine. This shift from features to prompts is the core of the new paradigm. But this raises a crucial, and surprisingly complex, question: what makes a good prompt?
This isn't a trivial matter. When your briefing document can be thousands of words long—containing your entire profile and a long list of recent interactions—its structure becomes just as important as its content. Anyone who has used a tool like ChatGPT knows this intuitively. Asking a question in a clear, well-structured way yields a much better answer than a rambling, disorganized query. For LinkedIn, a system that constructs billions of these prompts every day, this is a multi-million dollar engineering challenge.
Their researchers have published detailed studies on a fascinating limitation of all large language models, a problem we can think of as the AI's "attention span." Understanding this limitation is the key to understanding the new "secret sauce" of the platform. It will change how you think about your content, your profile, and even the order of your sentences. This stage of the funnel is all about History Construction—the art of building the most effective prompt to overcome the engine's inherent blind spots.
And while you cannot directly write the prompts that are sent to 360Brew—they are assembled programmatically in fractions of a second—you have absolute control over the quality of the raw materials the system uses to build them.
The "Lost-in-Distance" Challenge: The AI's Attention Span
● What happens: Large language models, for all their power, have a cognitive limitation that mirrors
our own. It's a phenomenon that LinkedIn's researchers, in a paper titled "Lost-in-Distance," have documented in detail.
Imagine you are given a 50-page report and asked to answer a complex question that requires you to connect two key facts. If those two facts are in the same paragraph on page 2, the task is easy. If one fact is in the executive summary on page 1 and the other is in the final recommendations on page 50, it's still relatively straightforward. But what if one crucial detail is buried in a footnote on page 17, and the other related detail is hidden in a data table on page 42? You'd likely struggle to make the connection. The two pieces of information are simply too far apart—they are "lost in distance" from each other.
LLMs suffer from the exact same problem. When the system constructs a long prompt containing your profile and a detailed history of your interactions, the model's ability to reason effectively depends on the proximity of relevant information. It is very good at using information at the very beginning of the prompt (your profile) and at the very end (the candidate post it's evaluating). However, its ability to cross-reference and connect two related pieces of information degrades significantly as the distance between them in the prompt increases.
Let's use a practical example. Suppose the system is trying to decide whether to show you a post about "Sustainable Finance" from a VP at Goldman Sachs. The prompt might contain the following pieces of information about you, scattered among dozens of other interactions:
○ Near the beginning: Your profile headline says "ESG Investing Professional."
○ Buried in the middle: You liked a post about "Impact Investing" three weeks ago.
○ Also in the middle: Your work history shows you once worked at Goldman Sachs.
● For an effective prediction, the AI needs to connect all three of these points: you are an ESG professional, you are interested in a related topic, and you have an affinity for the author's company. If these facts are separated by thousands of tokens of other, less relevant activity, the model may fail to connect the dots. Its performance will degrade. The signal will be lost in the noise.
● So what?: This means that the structure of the prompt is a critical, hidden ranking factor. The way LinkedIn's systems choose to order your interaction history can dramatically impact the outcome of the final ranking. This is no longer a simple chronological feed. It's a carefully curated narrative, assembled in the moment to be as persuasive and easy-to-understand for the 360Brew engine as possible. The system isn't just a data fetcher; it's a programmatic prompt engineer. It actively works to put the most important information "front and center" where the AI is most likely to see and
use it effectively. For you as a creator, this is a profound realization: the system isn't just evaluating your content, it's building a case for it, and the quality of your textual inputs determines how strong that case can be.
● Now what?: Your job is to make it incredibly easy for the system's prompt engineer to find your most important information and build a compelling case for you. You need to create text that is "prompt-friendly."
○ Front-load Your Value: This is the most direct application of the "Lost-in-Distance" principle. Place your most important keywords, value propositions, and job titles at the beginning of every text field. This is the information that is most likely to be placed at the top of the prompt's context, where the AI's attention is strongest.
■ In your Headline: "Data-Driven Marketing Leader | B2B SaaS | AI & Analytics" is better than "Helping Companies Grow with Marketing." The first is dense with key entities placed at the start.
■ In your About Section: Start with a powerful, one-sentence summary that encapsulates who you are and what you do. "I am a product marketing executive with 15 years of experience leading go-to-market strategies for high-growth SaaS companies" is a perfect opening line. Don't bury your core expertise in the third paragraph behind a lengthy personal story.
■ In your Posts: Your first sentence is the most valuable real estate you have. It must hook the reader and clearly signal the topic and value of the post. This ensures that even in a truncated view or as an item in a list, the core message is at the "top" of that piece of context.
○ Create "Dense" Signals of Expertise: Instead of scattering your skills and interests across many low-effort posts, concentrate them. A single, well-written, in-depth article on a niche topic is a much "denser" and more powerful signal than twenty vague, unrelated updates.
This creates a strong, self-contained piece of context that the prompt engineer can easily pull and feature as a prime example of your expertise.
○ Maintain a Coherent Narrative: A profile where the headline, summary, and experience all tell the same professional story is easier for the model to understand. This consistency reduces the cognitive load on the AI, as it doesn't have to reconcile contradictory or widely dispersed signals to figure out who you are. This coherence makes your professional "dossier" much easier to read and interpret.
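The "programmatic prompt engineer" behavior described in this section can be sketched in code: given a candidate post, pull the history items most related to it to the front of the list, so that related facts sit close together instead of being lost in distance. A real system would use learned semantic embeddings; plain word overlap (Jaccard similarity) stands in here, and every string is invented for illustration.

```python
# Sketch of countering "Lost-in-Distance": move history items related to the
# candidate post next to each other at the top of the prompt context.
# Word overlap stands in for real semantic embeddings; all data is invented.

def jaccard(a: str, b: str) -> float:
    """Crude word-overlap similarity between two texts, in [0, 1]."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def reorder_history(history: list[str], candidate: str) -> list[str]:
    """Most candidate-similar interactions first, so the model sees them together."""
    return sorted(history, key=lambda item: jaccard(item, candidate), reverse=True)

history = [
    "liked a post about weekend hiking trails",
    "commented on an article about impact investing funds",
    "viewed a post about office furniture deals",
    "shared an article about sustainable finance trends",
]
candidate = "article about sustainable finance and impact investing"
ordered = reorder_history(history, candidate)
```

Note how the two finance-related interactions, originally separated by unrelated activity, end up adjacent at the top of the list, exactly the proximity the model needs to connect the dots.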
History Construction: Building the Perfect Briefing, Automatically
● What happens: Knowing that the 360Brew engine suffers from the "Lost-in-Distance" problem, LinkedIn's systems cannot simply dump a member's entire chronological history into the prompt. Doing so would be inefficient and ineffective. Instead, the system must act as an intelligent, automated editor, programmatically curating a history that is most likely to lead to an accurate prediction.
This is the core of the system's own "secret sauce." You, the user, cannot directly influence this process. You can't tell the system, "Hey, for this next post, please use a similarity-based history." This all happens behind the scenes, governed by sophisticated engineering and machine learning. Based on the research papers and an understanding of the problem, the system likely employs several automated history construction strategies, choosing the best one for the specific task at hand.
○ Chronological History: The most straightforward approach is to order your interactions from oldest to newest. This is useful for tasks where understanding your evolving journey or the sequential nature of a conversation is important. However, as we know, this can fall victim to the "Lost-in-Distance" problem if your most relevant information is chronologically old.
○ Recency-Weighted History: A simple but effective modification is to heavily prioritize your most recent interactions. The system's logic automatically gives far more weight to your activity from the last hour than your activity from last week, placing it more prominently in the prompt. This is the mechanism behind the system's hyper-responsiveness.
○ Similarity-Based History: This is a much more powerful and computationally intensive strategy. When evaluating a candidate post for you, the system can first perform a quick search of your entire interaction history to find past posts that are most semantically similar to the candidate.
It might find three posts you liked on the same topic, or two articles you shared from the same author. The system's logic then takes these highly relevant historical examples and places them at the top of the "Past Interaction Data" section of the prompt. It's like a lawyer's assistant automatically finding and highlighting the three most relevant case precedents for a new legal brief. It directly overcomes the "Lost-in-Distance" problem by programmatically moving the most relevant information close together.
○ Priority-Based History: The system understands that not all interactions are created equal. A thoughtful comment is a far stronger signal of interest than a passive view. A "share with comment" is more significant than a simple "like." The system can be programmed with a priority dictionary, ensuring that these high-intent actions are automatically given preferential placement within the prompt, regardless of when they occurred.
● In a live production environment, the system is likely using a hybrid of these strategies, with its own models dynamically choosing the best combination based on you, the content, and the specific prediction task. This curation is fully automated.
● So what?: The personalization you experience on LinkedIn is not a passive reflection of your history; it is an active, curated, and machine-generated interpretation of your history. The system is constantly making automated editorial decisions about what aspects of your professional identity are most relevant in this moment. This means that the quality and clarity of your past actions have a compounding effect. A history filled with clear, high-intent engagements on a coherent set of topics gives the programmatic prompt engineer a wealth of powerful evidence to work with. A history of vague, low-effort, or scattered engagement provides weak evidence, resulting in a less persuasive prompt and, consequently, less relevant recommendations. You control the quality of the ingredients; the system controls the recipe.
● Now what?: You can't control the automated recipe, but you have 100% control over the quality of the ingredients you provide. Your goal is to fill your historical record with the highest-grade raw materials for the prompt engineer to use.
○ Prioritize High-Intent Engagement: A single thoughtful comment is worth more than a dozen mindless likes. When you engage, aim for depth. Add value to the conversation. Ask insightful questions. Share a post with your own unique take.
These actions are the "priority" ingredients that the history construction algorithm is designed to find and elevate.
○ Build a Thematically Consistent History: While it's fine to have diverse interests, your core professional engagement should be thematically consistent. If you are an expert in cybersecurity, a significant portion of your high-intent engagement should be on cybersecurity topics. This creates a dense cluster of similar, high-quality interactions, making it easy for the similarity-based history constructor to find powerful examples to put in your prompt's In-Context Learning section.
○ Don't Just Consume, Create and Converse: The system is trying to understand you as a professional, and professionals are active participants in their field. A history that only shows you passively "liking" content is less informative than a history that shows you creating content, starting conversations, and adding your voice to existing ones. The latter provides much richer context for the AI to reason with, giving the prompt engineer better material to build its case.
Bringing It All Together: A Look Inside the Prompt
So far, we've discussed the theory behind the 360Brew engine and the art of automated prompt construction. Now, let's make it concrete. What does one of these briefing documents, assembled in a fraction of a second for a single ranking decision, actually look like? Based on the structure and syntax revealed in LinkedIn's own research papers, we can infer the format. The examples below are fictional, but they are constructed to be faithful to the principles and components we've discussed. They are your first look "under the hood" at the new operating logic of the LinkedIn feed. As you read them, notice how the different textual elements from a member's profile and activity are woven together into a single, coherent document for the AI to analyze.
Example 1: Prompt to Predict a "Like" on a Marketing Strategy Post
Instruction: You are provided a member's profile and a set of posts, their content, and interactions that the member had with the posts. For each past post, the member has taken one of the following actions: liked, commented on, shared, viewed, or dismissed. Your task is to analyze the post interaction data along with the member's profile to predict whether the member will like, comment, share, or dismiss a new post referred to as the "Question" post.
Note: Focus on topics, industry, and the author's seniority more than other criteria.
In your calculation, assign a 30% weight to the relevance between the member's profile and the post content, and a 70% weight to the member's historical activity.
Member Profile: Current position: Senior Content Marketing Manager, current company: HubSpot, Location: Boston, Massachusetts.
Past post interaction data:
Member has commented on the following posts: [Author: Ann Handley, Content: 'Great content isn't about storytelling; it's about telling a true story well. In B2B, that means focusing on customer success...', Topics: content marketing, B2B marketing]
Member has liked the following posts: [Author: Christopher Penn, Content: 'Ran the numbers on the latest generative AI model's impact on SEO. The results are surprising... see the full analysis here...', Topics: generative AI, SEO, marketing analytics]
Member has dismissed the following posts: [Author: Gary Vaynerchuk, Content: 'HUSTLE! There are no shortcuts. Stop complaining and start doing...', Topics: entrepreneurship, motivation]
Question: Will the member like, comment, share, or dismiss the following post: [Author: Rand Fishkin, Content: 'Everyone is focused on AI-generated content, but the real opportunity is in AI-powered distribution. Here's a framework for thinking about it...', Topics: marketing strategy, AI, content distribution]
Answer: The member will like
Analysis of the Prompt: In this first example, you can see the core components in action. The Member Profile establishes a clear identity ("Senior Content Marketing Manager"). The Past post interaction data provides powerful in-context examples: the member engages with industry leaders (Ann Handley, Christopher Penn) on core topics (content marketing, AI, SEO) but dismisses generic motivational content. The Question presents a post that is a perfect topical and conceptual match. The engine reads this entire narrative and, using its reasoning capabilities, correctly predicts a high-intent action like a "like."
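For readers who think in code, a briefing in this format might be assembled along the following lines. The section labels mirror the prompt structure shown in the example; the function name, signature, and data are illustrative assumptions, not LinkedIn's implementation.

```python
# Hedged sketch of assembling a briefing like Example 1 from structured parts.
# Section labels follow the published prompt format; everything else is invented.

def assemble_briefing(instruction: str, profile: str,
                      interactions: dict[str, list[str]],
                      question: str) -> str:
    """Weave profile and grouped interactions into a single prompt string."""
    lines = ["Instruction:", instruction, "", "Member Profile:", profile, ""]
    lines.append("Past post interaction data:")
    for action, posts in interactions.items():
        # Each action group becomes its own labeled block of bracketed posts.
        lines.append(f"Member has {action} the following posts:")
        lines.extend(f"  [{p}]" for p in posts)
    lines += ["", "Question:", question]
    return "\n".join(lines)

briefing = assemble_briefing(
    instruction="Predict whether the member will like, comment, share, or dismiss.",
    profile="Current position: Senior Content Marketing Manager, company: HubSpot.",
    interactions={
        "commented on": ["Author: Ann Handley, Topics: content marketing"],
        "dismissed": ["Author: Gary Vaynerchuk, Topics: motivation"],
    },
    question="Author: Rand Fishkin, Topics: marketing strategy, AI",
)
```

Everything in this string is plain text the model reads in one pass, which is why the ordering choices discussed in this section matter so much.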
Example 2: Prompt to Predict a "Comment" on a Product Marketing Post
Instruction: You are provided a member's profile and a set of posts, their content, and interactions that the member had with the posts. For each past post, the member has taken one of the
following actions: liked, commented on, shared, viewed, or dismissed. Your task is to analyze the post interaction data along with the member's profile to predict whether the member will like, comment, share, or dismiss a new post referred to as the "Question" post.
Note: Focus on topics, industry, and the author's seniority more than other criteria. In your calculation, assign a 30% weight to the relevance between the member's profile and the post content, and a 70% weight to the member's historical activity.
Member Profile: Current position: Director of Product Marketing, current company: Salesforce, Location: San Francisco, California.
Past post interaction data:
Member has commented on the following posts: [Author: Avinash Kaushik, Content: 'Most analytics dashboards are data pukes. I'm challenging you to present just ONE metric that matters this week. What would it be?', Topics: data analytics, marketing metrics]
Member has shared with comment the following posts: [Author: Joanna Wiebe, Content: 'Just released a new case study on how a simple copy tweak increased conversion by 45%. The key was changing the call to value, not call to action...', Topics: copywriting, conversion optimization]
Member has liked the following posts: [Author: Melissa Perri, Content: 'Product strategy is not a plan to build features. It's a system of achievable goals and visions that work together to align the team around what's important.', Topics: product management, strategy]
Question: Will the member like, comment, share, or dismiss the following post: [Author: April Dunford, Content: 'Hot take: Most companies get their positioning completely wrong because they listen to their customers instead of observing their customers. What's the biggest positioning mistake you've seen?', Topics: product marketing, positioning, strategy]
Answer: The member will comment
Analysis of the Prompt: This second example illustrates a more nuanced prediction.
The historical data shows a pattern of not just liking, but actively commenting and sharing with comment, particularly on posts that ask questions or present strong opinions. The Question itself, from a known
expert (April Dunford), is designed to elicit a response by asking a direct question. The 360Brew engine, by reading this context, can infer that this member's pattern of behavior goes beyond simple approval. It recognizes the prompt for what it is—an invitation to a professional conversation—and correctly predicts the higher-intent action: a "comment."
These examples reveal the new reality of LinkedIn. Your success is no longer a game of numbers and signals, but a matter of narrative and context. The strength of the case presented in these prompts is directly determined by the quality of the text you provide in your profile, your content, and your engagement. The following checklists are designed to help you make that case as compelling as possible.
Step 4: Finalization, Diversity & Delivery
After the 360Brew engine has performed its intensive, prompt-based analysis and returned a relevance score for each of the thousands of candidate posts, the core "thinking" is done. The system now has a ranked list, ordered from what the AI predicts will be most valuable to you down to the least. However, the process is not yet complete. If the system simply took the top-scoring posts and delivered them directly to your screen, the result might be highly relevant, but it could also be monotonous, repetitive, or unbalanced. You might see five posts in a row from the same hyperactive person in your network, or a feed entirely dominated by a single trending topic. A purely relevance-driven feed is not necessarily a healthy or engaging one. This final step is about applying a layer of editorial judgment and platform-wide rules to this ranked list.
It’s the stage where the raw, mathematical output of the AI is refined to create a balanced, diverse, and safe user experience. This involves applying final business rules, ensuring feed diversity, and preparing the content for final delivery to your specific device. Many of the principles from the old system's "Re-Ranking & Finalization" stage are still very much alive here, serving as essential guardrails for the powerful new engine.
Applying Final Business Rules: The Platform Guardrails
● What happens: Before the feed is shown, the top-ranked list of posts from 360Brew goes through a final, rapid series of automated checks. These are not about re-evaluating relevance but about enforcing platform-wide business rules and policies. This is a critical layer of governance that ensures the feed adheres to both community standards and a good user experience. This stage includes several key filters:
○ Trust & Safety Moderation: This is the most important guardrail. Every piece of content is checked against LinkedIn's professional community policies. Automated systems, and in some cases human reviewers, work to identify and remove content that violates these policies, such as misinformation, hate speech, or spam. Even if a post scores highly for relevance with 360Brew, it will be removed at this stage if it is flagged by the Trust & Safety systems.
○ Impression Discounting: The system keeps a memory of what you've recently seen. If you've already seen a particular post (i.e., it was rendered on your screen during a previous session), its score will be heavily discounted or it will be removed entirely from the list for your next feed refresh. This is to prevent you from seeing the same content over and over again.
○ Frequency Capping (Anti-Gaming Rules): This is a crucial rule to prevent a single person or topic from dominating your feed. The system applies rules like, "Do not show a member more than X posts from the same author in a single feed session," or "Ensure there is a minimum gap between posts on the same viral topic." This prevents your feed from being flooded by a single, prolific creator or a single news event, even if their individual posts are all scoring highly.
○ Block Lists & Mutes: This filter respects your personal preferences.
If you have blocked a member, muted them, or unfollowed them, their content is explicitly removed from your feed at this stage, regardless of its relevance score.
● So what?: This means that pure, raw relevance is not the only factor that determines what you see. LinkedIn actively intervenes to shape the final feed for health, safety, and a good user experience. The platform is making an editorial judgment that a balanced and safe feed is more valuable in the long run than a feed that is simply a firehose of the highest-scoring content. This also means there are hard limits to visibility. No
matter how great your content is, you cannot brute-force your way into a user's feed ten times in a row. The system is explicitly designed to prevent that.
● Now what?: While you cannot directly influence these rules, you can align your strategy with their intent, which is to foster a healthy and diverse professional community.
○ Post Consistently, Not Repetitively: Maintain a good posting cadence to stay top-of-mind, but avoid posting so frequently that you trigger frequency caps for your most engaged followers. Blasting out five posts in a single hour is more likely to get your later posts suppressed than to increase your overall reach. Space out your valuable content.
○ Vary Your Content: If you post often, try to vary your topics and formats. This not only keeps your content fresh for your audience but also makes it less likely to be flagged by anti-gaming rules designed to prevent repetitive content. A mix of text posts, articles, videos, and shares is healthier than a monolithic stream of the same type of update.
○ Play the Long Game: Understand that the system is designed to provide a good experience over weeks and months, not just in a single session. Building a loyal following who finds your content consistently valuable is a more durable strategy than trying to create a single viral hit that might get throttled by the system's guardrails anyway.
○ Always Adhere to Professional Community Policies: This should go without saying. The fastest way to have zero visibility is to create content that violates LinkedIn's rules. Professionalism, respect, and authenticity are the price of admission.
Ensuring Feed Diversity: From Manual Rules to Automated Curation
● What happens: Beyond the hard-coded business rules, the system also works to ensure the feed is topically and structurally diverse. The old system accomplished this primarily through rigid, rule-based re-rankers.
For example, a rule might have stated, "Ensure a minimum gap of two items between any 'out-of-network' posts." The new ecosystem, powered by models like 360Brew and its predecessors like LiGR, can handle this in a much more intelligent and automated way. The latest research papers describe a move towards setwise ranking. Instead of evaluating each post in isolation (pointwise ranking), a setwise model
looks at the top-ranked posts as a group, or a "set." It can see the top 10 or 20 posts that are likely to be shown and can ask questions like:
○ "Are too many of these posts from the same author?"
○ "Are all of these posts about the same trending topic?"
○ "Does this set contain a good mix of content formats (text, video, articles)?"
● The model can then adjust the scores, perhaps down-ranking a post that is too similar to another, higher-ranked post, or boosting a post that adds unique value or a different perspective to the set. This allows the system to learn what a "good" slate of content looks like for each member, rather than relying on one-size-fits-all rules. For example, it might learn that you prefer to see a few posts about your core industry, followed by one about a secondary interest, and another that is a poll or a question to engage with. It can then curate the feed to match this learned preference for diversity.
● So what?: This means that the success of your post can be affected by the other content that is ranking highly for a member at that moment. Even if your post scores highly on its own, its chances can be boosted or reduced based on the context of the entire feed session. Uniqueness and complementary value matter. If ten other experts in your field have just posted about the same breaking news, your own post on that topic, even if it's excellent, might be down-ranked for a particular user in favor of a post on a different, valuable topic. Conversely, if your post offers a unique angle or covers an underserved topic, it might be boosted to add diversity to the feed.
● Now what?: You can't control what other content is ranking, but you can control the uniqueness and value proposition of your own content.
○ Offer a Unique Angle: When commenting on a trending topic, don't just regurgitate the same talking points. Try to provide a unique perspective, a piece of data no one else has, or a contrarian viewpoint.
This makes your content a "diversity candidate," increasing its chances of being selected to balance a feed that might otherwise be monotonous.
○ Develop Your Niche: As discussed before, focusing on a specific niche is a powerful strategy. It not only helps you build a dedicated audience but also makes your content a valuable source of diversity for the system. Your deep expertise on a specific topic is a unique asset that the setwise ranker can use to create a more interesting and valuable feed for members interested in that niche.
○ Consider the "Feed Mix": While you can't predict the feed, be aware of the general conversations happening in your industry. If everyone is talking about Topic A, that might be the perfect time to publish your thoughtful
piece on Topic B. Your content might stand out not just to users, but to the setwise ranker itself.

Delivery: Formatting for the Final Destination
● What happens: In the final milliseconds of the process, the curated, ranked, and finalized list of posts is handed over to the delivery systems. This stage is responsible for formatting the content for your specific device—whether it's a web browser on a large monitor, an iOS app, or an Android device. Specialized "Render Models" take the raw content and prepare it for display, ensuring that text wraps correctly, images are sized appropriately, and videos are ready to play. The formatted feed is then sent to your device and rendered on your screen. The content has made it!
● So what?: The system is optimizing not just for relevance, but for a good consumption experience on every platform. This is a subtle but important final step. A post that is difficult to read on a mobile device, for example, will likely have lower engagement, which feeds back into the system as a negative signal over time.
● Now what?: This is the simplest step to align with, but one that is often overlooked. Always design your content to be easily consumable on mobile devices, as this is where a majority of users interact with the feed.
○ Use Short Paragraphs: Break up large walls of text. One or two sentences per paragraph is a good rule of thumb.
○ Check Your Visuals: If you are creating an image or a carousel with text, make sure the font is large and legible enough to be read easily on a small phone screen.
○ Write Concise Video Hooks: The first few seconds of a video are critical. On mobile, users scroll quickly. Your opening must grab their attention immediately, with or without sound (so use captions!).
From the raw power of 360Brew's predictions to the final, refined list that appears on your phone, this finalization stage is a crucial part of the process.
It's where the platform's broader goals—community health, user experience, and safety—are layered on top of pure relevance. By understanding and aligning with these goals, you move from simply creating content to being a valuable and trusted contributor to the entire professional ecosystem.
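For intuition, the setwise diversity adjustment described above can be sketched in a few lines of code. This is an illustrative toy only: the `Post` fields, penalty weights, and greedy strategy are our own assumptions, not LinkedIn's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    score: float  # pointwise relevance score from the ranker
    author: str
    topic: str


def setwise_rerank(posts, author_penalty=0.15, topic_penalty=0.10, top_k=10):
    """Greedily build a slate, down-weighting posts that repeat an
    author or topic already present in the slate."""
    remaining = sorted(posts, key=lambda p: p.score, reverse=True)
    slate = []
    while remaining and len(slate) < top_k:
        seen_authors = {p.author for p in slate}
        seen_topics = {p.topic for p in slate}

        def adjusted(p):
            s = p.score
            if p.author in seen_authors:
                s -= author_penalty  # redundancy penalty: same author
            if p.topic in seen_topics:
                s -= topic_penalty  # redundancy penalty: same topic
            return s

        best = max(remaining, key=adjusted)
        slate.append(best)
        remaining.remove(best)
    return slate
```

In this sketch, a post that scores 0.85 on its own can still fall below a 0.80 post from a fresh author on a fresh topic, which is exactly the "uniqueness matters" effect described above.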
LinkedIn Profile Checklist for Marketers & Creators

The previous sections have detailed the profound shift in how LinkedIn's AI works. We've moved from a world of numerical signals to a world of natural language, from a feature factory to a reasoning engine. Now, we translate that understanding into action. These checklists have been completely revised to align with this new paradigm. They are your practical, step-by-step guides to providing the highest-quality raw materials for the AI's prompt engineering system. The new guiding principle is simple: Communicate your value with clarity, because a powerful AI is now your primary audience.

New Guiding Principle: Your profile is no longer just a source for abstract features; it is the raw, foundational text that forms the "Member Profile" section of the AI's prompt. Every ranking decision for or against your content begins with the AI reading this document. Because of the "Lost-in-Distance" challenge, the information at the top of your profile—your photo, background, and especially your headline—is the most influential. A clear, compelling, and keyword-rich narrative in this section directly and powerfully impacts the AI's understanding of who you are, what you know, and who needs to see your content.

1. Profile Photo & Background Photo
● Why it Matters in the 360Brew Era: While the language model itself doesn't "see" your photo in the traditional sense, these visual elements are crucial trust and engagement signals for the humans who ultimately interact with your content. The 360Brew engine is designed to predict human behavior. A profile with a professional, high-quality photo is more likely to be trusted and engaged with by other members. This positive human engagement then becomes powerful "Past Interaction Data" that feeds the In-Context Learning mechanism. A strong photo leads to better human signals, which in turn leads to a stronger case in future prompts.
● What to do: Use a clear, professional headshot and a relevant, high-quality background photo.
● How to do it:
○ Profile Photo:
■ Use a high-resolution, well-lit photo where your face is clearly visible.
■ Dress professionally, consistent with your industry and role.
■ Ensure the background is simple and not distracting.
■ Use a real photo. Systems are increasingly adept at detecting AI-generated or fake images, which can be a negative trust signal.
○ Background Photo:
■ Use a high-quality image (1584 x 396 pixels is ideal).
■ Reflect your personal brand, company, industry, or a key professional achievement.
■ If you use text, ensure it's legible on both desktop and mobile devices without being cut off.

2. Headline
● Why it Matters in the 360Brew Era: Your headline is the single most important line of text on your profile. It is the title of your professional dossier. Due to the "Lost-in-Distance" effect, information at the top of the context has the most weight. Your headline is almost certainly the first piece of your profile that is rendered into the Member Profile section of the prompt. It sets the entire context for how the AI interprets everything else about you. A powerful headline primes the model to see you as an expert.
● What to do: Craft a concise, keyword-rich headline (up to 220 characters) that clearly states who you are, what you do, and the value you bring.
● How to do it:
○ Front-load Your Keywords: Place your 2-3 most important keywords or titles at the very beginning. "B2B SaaS Content Strategist | AI in Marketing" is immediately understood.
○ State Your Value Proposition: Briefly explain the problem you solve or the value you create. Example: "Helping enterprise tech companies build their content engine." This gives the language model rich, conceptual context.
○ Use the Language of Your Audience: Think about the terms your ideal connections or clients would search for. Use that language in your headline. This helps the Cross-Domain GNN in the Candidate Generation stage connect you to the right "graph neighborhood."
○ Keep it Updated: If your professional focus or key skills shift, update your headline immediately. It's the most influential part of your real-time professional identity.

3. About (Summary) Section
● Why it Matters in the 360Brew Era: If your headline is the title, your About section is the executive summary of your professional dossier. This is the largest block of narrative text the model has to learn from. It reads this section to understand the story behind your skills, the context of your achievements, and your professional "why." A well-written summary provides a rich, conceptual understanding that goes far beyond simple keywords, allowing the model to make more nuanced and accurate connections.
● What to do: Write a compelling, detailed summary that tells your professional story, weaving in your key skills, achievements, and goals naturally.
● How to do it:
○ Start with a Strong Opening Paragraph: Just like your headline, front-load the value. Your first paragraph should summarize your core expertise and value proposition.
○ Tell a Story with Keywords: Don't just list skills. Weave them into the narrative of your accomplishments. Instead of "Skills: SEO," write "I led the SEO strategy that resulted in a 300% increase in organic traffic for our flagship product." The model understands and values context and results.
○ Quantify Your Achievements: Numbers are a universal language, even for an LLM. Quantifying your accomplishments ("managed a team of 10," "grew revenue by $5M") provides concrete, verifiable data points that signal impact and credibility.
○ Mention Key "Entities": Naming notable companies you've worked with, technologies you've used, or significant projects you've led helps the system link your profile to other important nodes in the Economic Graph.

4. Experience Section
● Why it Matters in the 360Brew Era: The Experience section is the evidence that backs up the claims in your headline and summary.
Each job description is parsed
as text, providing the model with a chronological narrative of your career progression and the specific context in which you applied your skills. This detailed history allows the model to reason about the depth of your expertise.
● What to do: Detail each role with achievement-oriented descriptions, using industry-standard language and keywords.
● How to do it:
○ Link to Official Company Pages: Always link your role to the correct, official LinkedIn Company Page. This creates a clean, unambiguous link in the Economic Graph.
○ Use Precise Titles and Dates: Use your exact job title and accurate employment dates. This helps the model build a clear timeline of your career trajectory.
○ Focus on Achievements, Not Just Responsibilities: Use bullet points to describe your accomplishments in each role. Instead of "Responsible for social media," write "Grew our social media following by 50,000 and increased engagement by 25% in one year." Use the STAR method (Situation, Task, Action, Result) to frame your accomplishments. This provides rich, structured information that the AI can easily parse.
○ Embed Relevant Skills in Each Role: Naturally weave the specific skills and keywords relevant to each job into its description. This shows the model when and where you applied your expertise.

5. Skills Section (Endorsements & Skill Badges)
● Why it Matters in the 360Brew Era: The Skills section provides structured, verifiable data points that complement the narrative of your profile. While 360Brew is language-first, it still benefits from these explicit signals. Endorsements from other skilled professionals and Skill Badges from LinkedIn assessments serve as powerful, third-party validation of your claims. This is corroborating evidence for the AI.
● What to do: Curate a comprehensive list of your most relevant skills, seek endorsements for them, and complete LinkedIn Skill Assessments where possible.
● How to do it:
○ Pin Your Top 3 Skills: Place your most critical, relevant skills at the top so they are immediately visible.
○ Use Standardized Skill Terms: As you type, LinkedIn will suggest standardized skills. Use them. This maps your profile cleanly to the canonical "Skills" nodes in the Economic Graph.
○ Seek Strategic Endorsements: Ask connections who have direct knowledge of your work to endorse your key skills. An endorsement from another expert in that same skill carries more weight.
○ Earn Skill Badges: Passing a LinkedIn Skill Assessment adds a "verified" credential to your profile. This is a very strong, credible signal to both humans and the AI.

6. Recommendations
● Why it Matters in the 360Brew Era: Recommendations are the qualitative, third-party testimonials in your professional dossier. The 360Brew engine reads the full text of recommendations you've given and received. A well-written recommendation from a respected person in your network provides powerful social proof and rich semantic context about your skills and work ethic. The identity of the recommender also strengthens your connection to them in the Economic Graph.
● What to do: Request and give thoughtful, specific recommendations that highlight key skills and impactful achievements.
● How to do it:
○ Guide Your Recommenders: When asking for a recommendation, don't just send a generic request. Politely suggest the specific project or skills you'd like them to highlight.
○ Give Detailed, Valuable Recommendations: When recommending others, be specific. Mention the context of your work together, the skills they demonstrated, and the impact of their contribution. This not only helps them but also reflects positively on you as a thoughtful professional.

7. Education, Honors & Awards, Certifications, etc.
● Why it Matters in the 360Brew Era: These sections provide additional structured entities and keywords that enrich your profile's context. They are the credentials and accolades that round out your professional story.
A certification from a recognized body (like Google, HubSpot, or PMI) or an award from a respected industry organization adds verifiable credibility. The AI can recognize these entities and understands the weight they carry.
● What to do: Thoroughly complete all relevant sections with accurate, specific, and official information.
● How to do it:
○ Be Comprehensive: List your relevant degrees, certifications, publications, patents, and awards.
○ Use Official Names: Use the exact official names for institutions, certifications ("Project Management Professional (PMP)"), and publications. Link to the issuing organization where possible.
○ Use Description Fields: If a description field is available, use it to add context and relevant keywords. Explain what the project was about or what you learned in the certification course.

By meticulously optimizing these sections, you are not just filling out a form. You are authoring the foundational document that the world's most advanced professional reasoning engine will use to understand you. You are crafting the narrative that becomes the very first part of every prompt, setting the stage for every ranking decision to come.
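To make the "dossier" idea concrete, here is a hypothetical sketch of how a profile might be rendered into the "Member Profile" section of a prompt, with the headline placed first to reflect the "Lost-in-Distance" effect. The function name, field names, and template are invented for illustration; LinkedIn's real prompt template is not public.

```python
def build_member_profile_section(profile):
    """Render a profile dict into a plain-text prompt section.

    The headline goes first: text at the top of the context window
    carries the most weight for the model.
    """
    lines = [
        "## Member Profile",
        f"Headline: {profile['headline']}",
        f"About: {profile['about']}",
    ]
    # Experience entries become a chronological narrative the model can read.
    for role in profile.get("experience", []):
        lines.append(
            f"Experience: {role['title']} at {role['company']} - {role['summary']}"
        )
    if profile.get("skills"):
        lines.append("Skills: " + ", ".join(profile["skills"]))
    return "\n".join(lines)
```

Under this assumption, a vague headline degrades every downstream ranking decision, because it is the very first line the model reads about you.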
LinkedIn Content Pre-Launch Checklist for Creators

New Guiding Principle: Your content is the "Question" the AI is asked to evaluate. Every time your post is considered for someone's feed, it becomes the central subject of a detailed, dynamically-generated prompt. The 360Brew reasoning engine reads your text from top to bottom, analyzing its quality, clarity, and conceptual relevance. It then compares this "Question" against the "Member Profile" and "Past Interaction Data" to predict a reaction. The system is performing a sophisticated act of matchmaking, attempting to align the language, concepts, and ideas in your content with the demonstrated interests and expertise of each member. Creating content that is easy for a powerful AI to understand, contextualize, and see value in is the new key to visibility.

I. Before You Post: Content Strategy & Creation
This phase is about making sure the "Question" you're about to ask the AI is a good one. A muddled, low-value, or poorly targeted post is like asking a nonsensical question—it's unlikely to get a favorable response.

1. Topic Selection & Conceptual Alignment
● Why it Matters in the 360Brew Era: The 360Brew engine thinks in concepts, not just keywords. It understands the semantic relationships between topics. For example, it knows that "go-to-market strategy," "product-led growth," and "customer acquisition cost" are all related concepts within the domain of B2B marketing. Selecting a topic that aligns with your core expertise (as stated in your profile) and the interests of your target audience creates a powerful "conceptual resonance." When the AI reads a prompt where the concepts in your Profile, the member's History, and your new Content all align, it's a very strong signal of relevance.
● What to do: Strategically choose topics that create a strong conceptual link between your established expertise and your audience's needs.
● How to do it:
○ Identify Audience Pain Points: What are the key challenges, questions, and goals of your target audience? Frame your topics around providing solutions, insights, or new perspectives on these specific pain points.
○ Find Your Niche Intersection: The most powerful content lives at the intersection of three things: your deep expertise, your audience's needs, and
a unique perspective. Don't just talk about "AI in Marketing." Talk about "How Mid-Sized B2B SaaS Companies Can Use AI to Automate Competitive Analysis." This specificity is easily understood by the reasoning engine.
○ Align with Your Profile: The topics of your posts should be a direct reflection of the expertise you claim in your headline and About section. If your profile says you're a cybersecurity expert, your content should be about cybersecurity. This consistency creates a coherent narrative that the AI can easily understand and trust.

2. Content Format Selection
● Why it Matters in the 360Brew Era: Different formats are optimized to generate different types of engagement signals, which in turn become different types of "Past Interaction Data." The system learns which formats your audience prefers and which formats are best for certain topics. A video, for example, is excellent for generating "long dwell time," while a poll is designed for rapid, low-friction interaction. Choosing the right format for your message helps you elicit the type of engagement that best signals value.
● What to do: Choose a content format that best suits your message and is known to engage your target audience. Experiment to see what resonates.
● How to do it:
○ Text Posts: Ideal for focused insights, asking questions, or starting discussions. Because the text is the primary input for the LLM, well-written, well-structured text posts are incredibly powerful.
○ Articles/Newsletters: Best for establishing deep expertise. The long-form text provides a rich, dense source of conceptual information for the AI. A high-quality article becomes a cornerstone piece of evidence for your authority on a topic.
○ Images/Carousels: Excellent for making complex information digestible. Use high-quality visuals and ensure any text is legible on mobile. Provide descriptive alt-text and a strong introductory paragraph; this text is the primary context the AI will read.
○ Native Video: Great for building personal connection and capturing attention. Keep videos concise and add captions. The system can process the transcript of your video, so what you say is just as important as what you show.
○ Polls: Perfect for generating quick, broad engagement. While a lower-intent signal, a successful poll can significantly increase your content's initial velocity, helping it pass the Candidate Generation stage.

3. Crafting High-Quality, Engaging Content
● Why it Matters in the 360Brew Era: This is the most critical step. The 360Brew engine is, at its core, a language model. It is trained to recognize and value high-quality, well-structured, and coherent text. Typos, grammatical errors, rambling sentences, and logical fallacies are not just cosmetic issues; they are signals of low-quality content that the model can now detect. A well-argued, insightful, and clearly written post is inherently optimized for a system designed to understand language.
● What to do: Create content that is valuable, insightful, well-structured, and encourages meaningful interaction. Write for an intelligent human, and you will be writing for the AI.
● How to do it:
○ Hook Attention Immediately: The "Lost-in-Distance" challenge applies to your content, too. The first sentence of your post is the most important. It must grab attention and clearly state the value proposition to prevent a "scroll-past" (which is a negative signal).
○ Structure Your Argument: Use formatting—bolding, bullet points, short paragraphs—to structure your content. This makes it easier for both humans and the AI to parse your main points and follow your logic.
○ Provide Genuine Value First: Your primary goal should be to educate, inform, or inspire. Authentic, valuable content tends to resonate more deeply and generate higher-intent engagement signals (comments, shares).
○ Encourage Discussion: End your posts with an open-ended question. This explicitly invites comments. When a member comments, their response and your subsequent reply create a valuable "conversation thread" that signals to the system that your content is fostering a meaningful discussion.
○ Proofread Meticulously: A post riddled with errors is a signal of low quality. Use a grammar checker or have a colleague review your content before posting. Professionalism in your prose matters.
II. As You Post: Optimizing for Discovery & Initial Engagement
This phase is about ensuring your well-crafted content is packaged correctly for the system, making it as easy as possible for the prompt engineer to understand its context and for the GNN to connect it to the right audience.

4. Writing Compelling Copy & Headlines
● Why it Matters in the 360Brew Era: The text of your post (and the headline for an article) is the literal Question being fed to the 360Brew engine. Clear, engaging copy with relevant conceptual language not only attracts human attention but also makes the AI's comprehension task easier and more accurate. A strong opening stops the scroll, influencing implicit signals like dwell time.
● What to do: Craft clear, concise, and compelling text that includes relevant concepts and encourages viewers to engage further.
● How to do it:
○ Strong Opening: Make the first one or two sentences captivating. They should summarize the core value and create curiosity.
○ Incorporate Concepts Naturally: Weave in the 1-3 primary concepts your audience would associate with the topic. Don't "keyword stuff"; think about expressing the core ideas. Instead of listing "SEO, SEM, PPC," write about "building a holistic search engine presence." The model understands the connection.
○ Clear Call to Action (CTA): What do you want people to do? A direct CTA like "What are your thoughts?" or "Share your experience in the comments" explicitly frames the post as a conversation starter.

5. Strategic Use of Hashtags
● Why it Matters in the 360Brew Era: Hashtags are explicit categorization signals. They are structured metadata that helps the Candidate Generation GNN quickly understand the primary topic of your post and connect it to broader conversations and interest groups. While 360Brew can infer topics from your text, hashtags provide a clean, unambiguous signal that removes any guesswork.
● What to do: Use a small number of highly relevant hashtags that mix broad and niche topics.
● How to do it:
○ Use 3-5 Relevant Hashtags: This is generally a good range. Too many can look spammy and dilute the signal.
○ Mix Broad and Niche: Use one or two broad hashtags (e.g., #marketing, #leadership) for wider discovery and two or three niche hashtags (e.g., #productledgrowth, #b2bsaas) to attract a more specific, high-intent audience.
○ Avoid Irrelevant Hashtags: Using a popular but irrelevant hashtag to try and "hack" reach is now more likely to harm you. The language model can see the mismatch between your content's text and the hashtag, which can be interpreted as a low-quality or spam signal.

6. Tagging Relevant People & Companies (When Appropriate)
● Why it Matters in the 360Brew Era: Tagging is another form of explicit, structured metadata. It creates a direct "edge" in the Economic Graph between your post and the person or company you tag. This strengthens the signals for the Candidate Generation GNN, potentially increasing your post's reach into the tagged entity's network. It also triggers a notification, encouraging initial engagement.
● What to do: Tag individuals or companies only when they are genuinely relevant to the content.
● How to do it:
○ Relevance is Key: Tag people you are referencing, quoting, or collaborating with. Tag companies you are analyzing or celebrating. Do not tag a list of 20 influencers just for visibility. This is perceived as spam by both users and the system.
○ Notify & Engage: If you tag someone, they are notified. This is a powerful way to spark initial engagement if they find the content valuable and relevant, which in turn can bootstrap the post's velocity.

III. After You Post: Fostering Engagement & Learning
This phase is about capitalizing on the initial visibility your post receives and feeding the best possible signals back into the system's learning loop.

7. Engaging with Comments Promptly & Thoughtfully
● Why it Matters in the 360Brew Era: Comments are one of the most powerful forms of "Past Interaction Data." When someone comments on your post, and you reply, you are creating a rich, conversational thread.
The text of this entire conversation
can become context in future prompts. It signals to the system that your content is not a monologue but a catalyst for valuable professional discussion. This is a very high-quality signal.
● What to do: Monitor your posts and respond to comments in a timely and thoughtful manner.
● How to do it:
○ Acknowledge All Comments: Even a simple "Thanks for sharing your perspective!" can be valuable.
○ Answer Questions: If people ask questions, provide helpful, detailed answers. This further demonstrates your expertise.
○ Ask Follow-up Questions: Keep the conversation going. Your replies are as much a part of the content as your original post.
○ Foster Respectful Debate: If there are differing opinions, facilitate a professional and respectful discussion. A healthy debate is a sign of a highly engaging post.

By following this comprehensive checklist, you are systematically creating and packaging your content in a way that is perfectly aligned with how a large language model thinks. You are making it easy for the AI to understand your expertise, see the value in your content, and match it with the right audience.
LinkedIn Engagement Checklist for Marketers and Creators

New Guiding Principle: Your activity (likes, comments, shares) is the raw material for the "Past Interaction Data" section of the AI's prompt. Every engagement you make is no longer just a passive vote; it is an active contribution to the live, personalized briefing document that the 360Brew engine reads to understand you. Strategic engagement is the act of deliberately curating this data set. You are providing the real-time examples that the model uses for In-Context Learning, effectively teaching it what you value, who you are, and what conversations you belong in. A high-quality engagement history leads to a powerful, persuasive prompt and, consequently, a more relevant and valuable feed experience.

I. Quick Daily Engagements (5-15 minutes per day)
These are small, consistent actions that keep your "Past Interaction Data" fresh and aligned with your goals. Think of this as the daily maintenance of your professional identity signal.

1. Reacting Strategically to Relevant Feed Content
● Why it Matters in the 360Brew Era: Each reaction (Like, Celebrate, Insightful, etc.) you make is an explicit data point that gets logged and is eligible for inclusion in future prompts. When the prompt engineer assembles your Past Interaction Data, it might include a line like: "Member has liked the following posts: [Content of Post X]..." A reaction is a direct, unambiguous way of telling the system, "This content is relevant to me." Reacting to content from your target audience or on your core topics reinforces your position within that "conceptual neighborhood," strengthening the signals for both the Candidate Generation GNN and the 360Brew reasoning engine.
● What to do: Quickly scan your feed and thoughtfully react to 3-5 posts that are highly relevant to your expertise, industry, or target audience.
● How to do it:
○ Prioritize Relevance over Volume: Focus on reacting to posts from key connections, industry leaders, and on topics central to your brand. A single reaction on a highly relevant post is a better signal than 20 reactions on random content.
○ Use Diverse Reactions for Nuance: Don't just "Like" everything. Using "Insightful" on a data-driven post or "Celebrate" on a colleague's promotion provides a richer, more nuanced signal. While it's not explicitly stated how each reaction is weighted, it provides more detailed semantic information for the model to potentially learn from.
○ Avoid Indiscriminate Reacting: Mass-liking dozens of posts in a few minutes can dilute the signal of your true interests. It creates a noisy "Past Interaction Data" set, making it harder for the prompt engineer to identify what you genuinely find valuable. Be deliberate.

2. Brief, Insightful Comments on 1-2 Key Posts
● Why it Matters in the 360Brew Era: A comment is one of the most powerful signals you can create. It is a high-intent action that generates rich, textual data. When you comment, two things happen:
○ Your action is logged for your own Past Interaction Data: "Member has commented on the following posts: [Content of Post Y]..."
○ The text of your comment itself becomes associated with your professional identity. The 360Brew engine can read your comment and use its content to refine its understanding of your expertise and perspective. Leaving a relevant, insightful comment on another expert's post is like co-authoring a small piece of content with them. It explicitly links your identity to theirs in a meaningful, conceptual way.
● What to do: Identify 1-2 highly relevant posts in your feed and add a brief, thoughtful comment that contributes to the discussion.
● How to do it:
○ Add Value, Don't Just Agree: Instead of just writing "Great post!", expand on a point, ask a clarifying question, or share a brief, related experience. This provides unique text for the AI to analyze.
○ Use Relevant Concepts Naturally: Your comment text becomes a signal of your expertise.
If you're a cybersecurity expert, commenting with insights about "zero-trust architecture" on a relevant post reinforces your authority on that topic.
○ Be Timely: Commenting on fresher posts often yields more visibility and is more likely to be part of the "recency-weighted" history construction for others who see that post.
○ Keep it Professional and Constructive: Your comments are a permanent part of your professional record, readable by both humans and the AI.
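As a thought experiment, the "recency-weighted, similarity-based" history construction mentioned above can be sketched as follows. The word-overlap similarity, exponential decay, and 30-day half-life are stand-ins we invented for illustration; a production system would use learned embeddings and weights.

```python
def select_history(interactions, candidate_topic_words, k=3, half_life_days=30.0):
    """Pick the k past interactions most relevant to a candidate post.

    interactions: list of dicts with 'text' and 'age_days'.
    candidate_topic_words: set of lowercase words describing the candidate post.
    """
    def score(item):
        words = set(item["text"].lower().split())
        # Crude similarity proxy: shared words with the candidate topic.
        overlap = len(words & candidate_topic_words)
        # Exponential recency decay: an interaction loses half its
        # weight every half_life_days.
        recency = 0.5 ** (item["age_days"] / half_life_days)
        return overlap * recency

    ranked = sorted(interactions, key=score, reverse=True)
    return ranked[:k]
```

The practical takeaway matches the checklist advice: recent, on-topic engagements dominate the selected history, while old or off-topic activity quickly fades out of the prompt.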
II. Focused Daily/Regular Engagements (15-30 minutes per day/several times a week)
These activities require a bit more effort but create stronger, more durable signals that can significantly shape the AI's perception of your professional identity.

3. Participating Actively in 1-2 Relevant LinkedIn Groups
● Why it Matters in the 360Brew Era: Group activity is a powerful signal of deep interest in a specific niche. Your interactions within a group—the posts you share, the questions you answer—provide a concentrated stream of topically-aligned "Past Interaction Data." This makes it incredibly easy for the similarity-based history constructor to find strong, relevant examples. For the AI, your active participation in "The Advanced Product Marketing Group" is a powerful piece of evidence that you are, in fact, an expert in product marketing.
● What to do: Identify and actively participate in 1-2 LinkedIn Groups that are highly relevant to your industry, expertise, or target audience.
● How to do it:
○ Share Valuable Content: Post relevant articles, insights, or questions within the group. This establishes you as a contributor.
○ Engage with Others' Posts: Like, comment, and answer questions in group discussions. This creates a rich trail of high-intent, topically-focused engagement signals.
○ Choose Active, Well-Moderated Groups: The quality of the conversation matters. A well-moderated group provides higher-quality context for the AI to learn from.

4. Sending Personalized Connection Requests
● Why it Matters in the 360Brew Era: Expanding your relevant network strengthens your position in the Economic Graph, which is a key input for the Candidate Generation GNN. An accepted connection request is a strong positive signal. A personalized request is more likely to be accepted and can even become a piece of textual data itself (though it's private). More importantly, the people you connect with become a primary source of content and context.
Engaging with their content is what builds out your Past Interaction Data.
● What to do: Send a few targeted, personalized connection requests each week to individuals relevant to your professional goals.
● How to do it:
○ Always Add a Personal Note: Explain why you want to connect. Reference a shared interest, a recent post they wrote, or a mutual connection. This dramatically increases the acceptance rate.
○ Focus on Mutual Value: Think about what value the connection might bring to them as well. Networking is a two-way street.
○ Connect with People Who Engage with Your Content: If someone consistently likes or comments on your posts, they are an ideal candidate for a connection request. They have already demonstrated an interest in your expertise.
III. More Involved Weekly/Bi-Weekly Engagements (30-60+ minutes per session)
These are high-effort, high-impact activities that create cornerstone assets for your professional identity. They provide the richest, densest sources of textual data for the 360Brew engine to analyze.
5. Writing and Publishing LinkedIn Articles or Newsletters
● Why it Matters in the 360Brew Era: A long-form article or newsletter is the ultimate high-quality data source. The 360Brew engine is a language model; giving it a well-structured, 1,000-word article on your core area of expertise is like handing it a detailed research paper for its dossier on you. This becomes a powerful, permanent "node" in the Economic Graph that is rich with conceptual information. When the system evaluates your future, shorter posts, it can reference its deep understanding of your expertise from your articles. A successful newsletter also attracts subscribers, a very strong signal of audience validation.
● What to do: If you have in-depth insights to share, consider publishing LinkedIn Articles or starting a Newsletter on a topic relevant to your expertise and target audience.
● How to do it:
○ Choose a Niche Focus: Consistency is key.
A newsletter that consistently delivers value on a specific topic will build a loyal audience and create a coherent body of work for the AI to analyze.
○ Provide Substantial Value: Articles should offer deep insights, comprehensive guides, or unique perspectives. This is your chance to prove your expertise, not just state it.
○ Optimize for Readability: Use headings, subheadings, bullet points, and images to break up the text.
○ Engage with Comments: Foster a discussion on your published pieces. The conversation in the comments is an extension of the article itself.
6. Reviewing and Endorsing Skills for Connections
● Why it Matters in the 360Brew Era: Endorsing a skill for a connection is a structured data signal that reinforces the Economic Graph. While primarily benefiting the person you endorse, this reciprocal activity also signals your own areas of expertise and your engagement within your professional community. It tells the system, "I am a professional in this domain, and I am qualified to validate the skills of others." It's a subtle but valuable way to demonstrate your own standing.
● What to do: Periodically review connection profiles and endorse skills for which you can genuinely vouch.
● How to do it:
○ Be Authentic: Only endorse skills you know the person possesses.
○ Focus on Key Skills: Prioritize endorsing the most relevant and important skills for your connections.
○ Reciprocity Often Occurs: Connections you endorse may be more likely to endorse you back, further strengthening your own profile.
By consistently applying these engagement strategies, you are actively and deliberately curating the data set that defines you to the LinkedIn AI. You are moving from being a passive subject of an algorithm to an active participant in a conversation with a reasoning engine. This isn't about "being active" for the sake of it; it's about strategic, relevant, and valuable interactions that provide the clearest possible context for the AI to understand your professional identity and amplify your voice.
LinkedIn Newsfeed Technologies
This section provides a granular, technical outline of the LinkedIn newsfeed generation architecture, synthesized from publicly available research papers and engineering blogs. It details the specific systems, models, and data flows involved in the end-to-end process, from offline model training to real-time content delivery.
I. Offline Ecosystem: AI Asset Generation & Training
The offline ecosystem is responsible for all large-scale data processing and model training. It operates on a cadence of hours to months, producing the versioned AI models and embeddings that are consumed by the online serving systems.
● A. Pipeline Orchestration & Execution Environment:
○ 1.1. Orchestration Platform (OpenConnect): A platform built on Flyte for defining, executing, and managing all AI/ML workflows. Replaces the legacy ProML ecosystem.
○ 1.2. Dependency Management: Utilizes Docker containers and resource manifests to decouple component dependencies, enabling rapid iteration and eliminating full-workflow rebuilds for minor changes.
○ 1.3. Compute Environment: Multi-region, multi-cluster Kubernetes setup with a global scheduler for intelligent routing based on data locality and resource availability (CPU/GPU).
○ 1.4. Distributed Training Frameworks: Primarily PyTorch Fully Sharded Data Parallel (FSDP) for large model training. Utilizes Horovod for certain distributed tasks.
○ 1.5. Resilience: Employs active checkpointing and automated job retries via Flyte to handle node maintenance and infrastructure disruptions, reducing training failures by a reported 90%.
● B. Core Foundation Model Training (360Brew):
○ 2.1. Base Model: Mixtral 8x22B, a decoder-only Mixture-of-Experts (MoE) Transformer architecture.
○ 2.2. Training Stage 1: Continuous Pre-Training (CPT): Further pre-training of the base model on trillions of tokens of verbalized, first-party LinkedIn data (member profiles, interactions, Economic Graph data) to imbue it with domain-specific knowledge.
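The CPT stage depends on "verbalizing" structured records into natural-language text before they can become pre-training tokens. A minimal sketch of what that conversion might look like follows; the field names, schema, and sentence templates are illustrative assumptions, not LinkedIn's actual data model:

```python
# Hedged sketch: converting structured member data into "verbalized" text
# of the kind the document says 360Brew is pre-trained on.
# All field names and sentence templates here are assumptions.

def verbalize_profile(profile: dict) -> str:
    """Render a structured profile record as one block of training text."""
    parts = [f"Member headline: {profile['headline']}."]
    for job in profile.get("experience", []):
        parts.append(f"Worked as {job['title']} at {job['company']}.")
    if profile.get("skills"):
        parts.append("Skills: " + ", ".join(profile["skills"]) + ".")
    return " ".join(parts)

def verbalize_interaction(event: dict) -> str:
    """Render one engagement event as a sentence of training text."""
    return f"The member {event['action']} a post about {event['topic']}."

profile = {
    "headline": "Product Marketing Lead",
    "experience": [{"title": "PMM Manager", "company": "ExampleCo"}],
    "skills": ["positioning", "launch strategy"],
}
print(verbalize_profile(profile))
print(verbalize_interaction({"action": "commented on",
                             "topic": "zero-trust architecture"}))
```

The point of the sketch is the direction of the transformation: rows and graph edges become sentences, so a language model can consume them with no feature engineering.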
○ 2.3. Training Stage 2: Instruction Fine-Tuning (IFT): Fine-tuning on a blend of open-source and proprietary instruction datasets using preference alignment algorithms like DPO (Direct Preference Optimization) to enhance instruction-following and zero-shot reasoning capabilities.
○ 2.4. Training Stage 3: Supervised Fine-Tuning (SFT): Fine-tuning on millions of labeled examples in a Multi-Turn Chat (MTC) format to learn specific ranking and recommendation tasks. The loss function is a weighted combination of prompt loss and masked MTC loss.
○ 2.5. Final Artifact: A frozen, versioned 360Brew Foundation Model (150B+ parameters) is produced.
● C. Ancillary Model Training & Asset Generation:
○ 3.1. Candidate Generation Model (Cross-Domain GNN): A Graph Neural Network trained on a unified, heterogeneous graph that consolidates data from multiple domains (Feed, Jobs, Notifications, Email). Produces a model capable of generating holistic, cross-domain member embeddings.
○ 3.2. Efficient Model Generation (SLMs):
■ 3.2.1. Knowledge Distillation: A process where the large 360Brew model (teacher) is used to train a smaller, more efficient model (student), often by minimizing the KL divergence between their output logits.
■ 3.2.2. Structured Pruning: Utilizes algorithms like OSSCAR to perform one-shot or gradual pruning of MLP layers and attention heads, creating a smaller model (e.g., from 8B to 6.4B parameters) that is then fine-tuned via distillation to recover performance.
■ 3.2.3. Final Artifact: Produces a set of versioned Small Language Models (SLMs) for deployment in highly latency-sensitive or cost-constrained scenarios.
II. Real-Time Data Infrastructure
This layer is responsible for capturing and serving the most recent member activity, which is essential for the In-Context Learning mechanism of the online systems.
● A. Event Streaming & Ingestion:
○ 1.1. Event Bus: All client-side interactions (impressions, clicks, dwells, comments, shares) are published as events to a Kafka stream.
● B. Real-Time Data Storage & Serving:
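Before that storage layer, each client-side interaction travels the event bus as a serialized message. A hedged sketch of what one such event might look like; the schema, topic name, and field names are assumptions, since LinkedIn's actual event format is not public:

```python
# Hedged sketch of a client-interaction event bound for the Kafka event
# bus. Schema, topic name, and field names are illustrative assumptions.
import json
from dataclasses import dataclass, asdict

@dataclass
class InteractionEvent:
    member_id: str
    item_id: str
    action: str        # e.g. "impression", "click", "dwell", "comment", "share"
    dwell_ms: int = 0
    timestamp_ms: int = 0

def serialize(event: InteractionEvent) -> bytes:
    """Kafka message values are bytes; JSON keeps the sketch readable."""
    return json.dumps(asdict(event)).encode("utf-8")

event = InteractionEvent("member-123", "post-456", "dwell",
                         dwell_ms=4200, timestamp_ms=1700000000000)
payload = serialize(event)
# With a real client such as confluent-kafka, publishing would resemble:
# producer.produce("feed-interactions", value=payload, key=event.member_id)
print(payload.decode("utf-8"))
```

Keying by member ID (as the commented call suggests) is a common pattern so that one member's events stay ordered within a partition; whether LinkedIn does this is not stated in the sources.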
○ 2.1. In-Memory Datastore: The Kafka stream is consumed by real-time processing systems that write the recent interaction data into low-latency, key-value stores like Pinot or Venice.
○ 2.2. Function: This datastore serves as the source of truth for the "Past Interaction Data" used by the Real-Time Prompt Assembler during online inference.
III. Online Serving Funnel (Real-Time Inference)
This is the end-to-end, sub-second process executed for every feed request.
● A. L0: Candidate Generation:
○ 1.1. Graph-Based Retrieval: The pre-trained Cross-Domain GNN model is used to score potential candidates based on their relationship to the member's holistic identity within the Economic Graph.
○ 1.2. Similarity-Based Retrieval: Utilizes Approximate Nearest Neighbor (ANN) search algorithms (e.g., FAISS, ScaNN) on pre-computed content embeddings to find items semantically similar to a member's interests.
○ 1.3. Heuristic-Based Retrieval: A set of fast, rule-based systems that pull in candidates based on signals like timeliness (e.g., content from direct connections posted in the last hour) and engagement velocity.
○ 1.4. Aggregation: Candidates from all sources are collected, de-duplicated, and passed to the next stage. Output is a longlist of several thousand candidate item IDs.
● B. L2: Ranking & Reasoning:
○ 2.1. Real-Time Prompt Assembler: A service that, for each candidate item, constructs a unique, verbose prompt.
■ 2.1.1. Process: It fetches the member's profile text, the candidate post's text, and makes a live call to the Pinot/Venice datastore to retrieve the member's most recent interactions.
■ 2.1.2. History Construction Logic: Implements programmatic rules to mitigate the "Lost-in-Distance" problem. This can include similarity-based curation (prioritizing historical items similar to the candidate) and recency/priority weighting to structure the prompt for optimal performance.
○ 2.2. Inference Serving Engine:
■ 2.2.1. Framework: vLLM, an open-source serving framework for high-throughput LLM inference.
■ 2.2.2. Core Technology: Utilizes PagedAttention to efficiently manage GPU memory and enable high-concurrency request batching.
■ 2.2.3. Execution: The pre-compiled, frozen 360Brew or SLM model is loaded into the vLLM engine. The engine receives batches of dynamically generated prompts and performs inference.
■ 2.2.4. Optimization: Employs Tensor Parallelism to distribute the model across multiple GPUs and uses FP8 quantization to increase inference speed and reduce memory footprint on compatible hardware (e.g., H100 GPUs).
○ 2.3. Output: A list of candidate items, each with a high-precision relevance score from the foundation model.
● C. Finalization, Delivery & Feedback:
○ 3.1. Business Rule Filters: A final post-processing step that applies non-relevance-based rules:
■ Trust & Safety: Content moderation filters.
■ Impression Discounting: Down-ranks or removes content the member has already seen.
■ Frequency Capping: Applies rules to prevent author or topic saturation in the feed.
○ 3.2. Diversity Modeling (Setwise Ranking): An optional layer that can re-rank the top-N candidates by evaluating them as a collective "set" to optimize for session-level diversity and coherence, replacing legacy rule-based diversity re-rankers.
○ 3.3. Delivery: The final, ordered list of item IDs is sent to a Render Service which formats the content for the specific client (Web, iOS, Android) before delivering the final payload.
○ 3.4. Feedback Loop: All final impressions and subsequent user interactions are logged and streamed back via Kafka to both the real-time and offline data systems, closing the loop for the next cycle of ICL and model training.
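The prompt-assembly and history-construction steps in the funnel above can be sketched in a few lines. This is a simplified stand-in, not LinkedIn's implementation: the scoring rule, the lexical-overlap similarity (standing in for embedding similarity), and the prompt wording are all assumptions:

```python
# Hedged sketch of a "Real-Time Prompt Assembler": pick the k historical
# interactions most relevant to the candidate (similarity x recency), and
# place the strongest examples closest to the final question to mitigate
# the "Lost-in-Distance" effect. All names and heuristics are assumptions.

def similarity(a: str, b: str) -> float:
    """Jaccard word overlap: a crude stand-in for embedding similarity."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(1, len(wa | wb))

def build_prompt(profile: str, candidate: str, history: list,
                 k: int = 3, half_life_hours: float = 48.0) -> str:
    def score(h):
        recency = 0.5 ** (h["age_hours"] / half_life_hours)  # newer = heavier
        return similarity(h["text"], candidate) * recency
    curated = sorted(history, key=score)[-k:]  # ascending: best example last
    lines = [f"Member profile: {profile}", "Recent interactions:"]
    lines += [f'- {h["action"]} a post: "{h["text"]}"' for h in curated]
    lines.append(f'Candidate post: "{candidate}"')
    lines.append("Will this member engage with the candidate post? Answer yes or no.")
    return "\n".join(lines)

history = [
    {"text": "zero-trust architecture basics", "action": "commented on", "age_hours": 12},
    {"text": "quarterly sales recap", "action": "liked", "age_hours": 2},
    {"text": "zero-trust rollout lessons", "action": "shared", "age_hours": 400},
    {"text": "hiring announcement", "action": "liked", "age_hours": 5},
]
print(build_prompt("Cybersecurity consultant",
                   "A guide to zero-trust architecture", history, k=2))
```

Note the ordering choice: the highest-scoring example lands last, immediately before the question, which is the placement the Lost-in-Distance research suggests models attend to best. One such prompt is built per candidate item, which is why the batching efficiency of the serving engine matters so much.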
Advertisements
In case you missed the ads up front: Ready to transform your AI marketing approach? Schedule a complimentary consultation to discuss how Trust Insights can help you apply these frameworks to your unique challenges and opportunities. Our team of experts will provide personalized guidance based on your current AI maturity, strategic priorities, and organizational context. Contact Trust Insights Today for your free consultation.
Done By You services from Trust Insights:
- Almost Timeless: 48 Foundation Principles of Generative AI: A non-technical AI book by cofounder and Chief Data Scientist Christopher Penn, Almost Timeless teaches you how to think about AI and apply it to your organization.
Done With You services from Trust Insights:
- AI-Ready Strategist: Ideal for CMOs and C-Suite leaders, the AI-Ready Strategist teaches you frameworks and methods for developing, deploying, and managing AI at any scale, from the smallest NGO to the largest enterprises, with an emphasis on people, process, and governance.
- Generative AI Use Cases for Marketers course: Learn the 7 major use case categories for generative AI in marketing with 21 different hands-on exercises, all data and prompts provided.
- Mastering Prompt Engineering for Marketing course: Learn the foundation skills you need to succeed with generative AI, including 3 major prompt frameworks, advanced prompting techniques, and how to choose different kinds of prompts based on the task and tool.
Done For You services from Trust Insights:
- Customized consulting: If you love the promise of analytics, data science, and AI but don't love the huge amount of work that goes into fulfilling that promise, from data governance to agentic AI deployment, let us do it for you. We've got more than a decade of real-world AI implementation (AI existed long before ChatGPT) built on your foundational data so you can reap the benefits of AI while your competitors are still figuring out how to prompt.
- Keynote talks and workshops: Bring Trust Insights to your event! We offer customized keynotes and workshops for conferences, company retreats, executive leadership meetings, annual meetings, and roundtables. Every full-fee talk is customized to your event, industry, or company, and you get the talk recording and materials (transcripts, prompts, data) for your audience to work with and learn from.
About TrustInsights.ai
Trust Insights is a management consulting firm specializing in helping you turn data into results you care about. Whether it's traditional analytics and data science or the latest innovations in machine learning and artificial intelligence, Trust Insights helps you achieve practical, beneficial outcomes instead of playing buzzword bingo. With a variety of services from training and education to done-for-you AI deployments, help for any of your data and insights needs is just a tap away.
● Learn more about Trust Insights: https://www.trustinsights.ai
● Learn more about Trust Insights AI Services: https://www.trustinsights.ai/aiservices
Methodology and Disclosures
Sources
Original Sources (Pre-360Brew Era)
1. Borisyuk, F., Zhou, M., Song, Q., Zhu, S., Tiwana, B., Parameswaran, G., Dangi, S., Hertel, L., Xiao, Q. C., Hou, X., Ouyang, Y., Gupta, A., Singh, S., Liu, D., Cheng, H., Le, L., Hung, J., Keerthi, S., Wang, R., Zhang, F., Kothari, M., Zhu, C., Sun, D., Dai, Y., Luan, X., Zhu, S., Wang, Z., Daftary, N., Shen, Q., Jiang, C., Wei, H., Varshney, M., Ghoting, A., & Ghosh, S. (2024). LiRank: Industrial large scale ranking models at LinkedIn. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '24). Association for Computing Machinery. https://doi.org/10.1145/3637528.3671561 (Also arXiv:2402.06859v2 [cs.LG])
2. Borisyuk, F., Hertel, L., Parameswaran, G., Srivastava, G., Ramanujam, S., Ocejo, B., Du, P., Akterskii, A., Daftary, N., Tang, S., Sun, D., Xiao, C., Nathani, D., Kothari, M., Dai, Y., & Gupta, A. (2025). From features to transformers: Redefining ranking for scalable impact. In Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '25) [Anticipated publication]. (Also arXiv:2502.03417v1 [cs.LG])
3. Borisyuk, F., He, S., Ouyang, Y., Ramezani, M., Du, P., Hou, X., Jiang, C., Pasumarthy, N., Bannur, P., Tiwana, B., Liu, P., Dangi, S., Sun, D., Pei, Z., Shi, X., Zhu, S., Shen, Q., Lee, K.-H., Stein, D., Li, B., Wei, H., Ghoting, A., & Ghosh, S. (2024). LiGNN: Graph neural networks at LinkedIn. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '24). Association for Computing Machinery. https://doi.org/10.1145/3637528.3671566
4. Zhang, F., Kothari, M., & Tiwana, B. (2024, August 7). Leveraging Dwell Time to Improve Member Experiences on the LinkedIn Feed. LinkedIn Engineering Blog. Retrieved from https://www.linkedin.com/blog/engineering/feed/leveraging-dwell-time-to-improve-member-experiences-on-the-linkedin-feed (Original post: Dangi, S., Jia, J., Somaiya, M., & Xuan, Y. (2020, October 29). Understanding dwell time to improve LinkedIn feed ranking. LinkedIn Engineering Blog. Retrieved from https://engineering.linkedin.com/blog/2020/understanding-feed-dwell-time)
5. Ackerman, I., & Kataria, S. (2021, August 19). Homepage feed multi-task learning using TensorFlow. LinkedIn Engineering Blog. Retrieved from https://engineering.linkedin.com/blog/2021/homepage-feed-multi-task-learning-using-tensorflow (Also: https://www.linkedin.com/blog/engineering/feed/homepage-feed-multi-task-learning-using-tensorflow)
6. Zhu, J. (S.), Ghoting, A., Tiwana, B., & Varshney, M. (2023, May 2). Enhancing homepage feed relevance by harnessing the power of large corpus sparse ID embeddings. LinkedIn Engineering Blog. Retrieved from https://engineering.linkedin.com/blog/2023/enhancing-homepage-feed-relevance-by-harnessing-the-power-of-lar
7. Mohamed, A., & Li, Z. (2019, June 27). Community-focused Feed optimization. LinkedIn Engineering Blog. Retrieved from https://engineering.linkedin.com/blog/2019/06/community-focused-feed-optimization
8. Ouyang, Y., Gupta, V., Basu, K., Diciccio, C., Gavin, B., & Guo, L. (2020, August 27). Using Bayesian optimization for balancing metrics in recommendation systems. LinkedIn Engineering Blog. Retrieved from https://www.linkedin.com/blog/engineering/recommendations/using-bayesian-optimization-for-balancing-metrics-in-recommendat
9. Ghike, S., & Gupta, S. (2016, March 3). FollowFeed: LinkedIn's Feed Made Faster and Smarter. LinkedIn Engineering Blog. Retrieved from https://engineering.linkedin.com/blog/2016/03/followfeed-linkedin-s-feed-made-faster-and-smarter
10. Gupta, R., Ovsankin, S., Li, Q., Lee, S., Le, B., & Khanal, S. (2022, April 26). Near real-time features for near real-time personalization. LinkedIn Engineering Blog. Retrieved from https://engineering.linkedin.com/blog/2022/near-real-time-features-for-near-real-time-personalization
11. GV, F. (2022, September 28). Open Sourcing Venice: LinkedIn's Derived Data Platform. LinkedIn Engineering Blog. Retrieved from https://engineering.linkedin.com/blog/2022/open-sourcing-venice-linkedin-s-derived-data-platform
12. Hosni, Y. (2022, November). How LinkedIn Uses Machine Learning To Rank Your Feed. KDnuggets. Retrieved from https://www.kdnuggets.com/2022/11/linkedin-uses-machine-learning-rank-feed.html
13. Jurka, T., Ghosh, S., & Davies, P. (2018, March 15). A Look Behind the AI that Powers LinkedIn's Feed: Sifting through Billions of Conversations to Create Personalized News Feeds for Hundreds of Millions of Members. LinkedIn Engineering Blog. Retrieved from https://engineering.linkedin.com/blog/2018/03/a-look-behind-the-ai-that-powers-linkedins-feed-sifting-through
14. Yu, Y. Y., & Saint-Jacques, G. (n.d.). Choosing an algorithmic fairness metric for an online marketplace: Detecting and quantifying algorithmic bias on LinkedIn. [Unpublished manuscript/Preprint, contextually implied source].
15. Sanjabi, M., & Firooz, H. (2025, February 7). 360Brew: A Decoder-only Foundation Model for Personalized Ranking and Recommendation. arXiv. arXiv:2501.16450v3 [cs.IR]
16. Firooz, H., Sanjabi, M., Jiang, W., & Zhai, X. (2025, January 2). Lost-in-Distance: Impact of Contextual Proximity on LLM Performance in Graph Tasks. arXiv. arXiv:2410.01985v2 [cs.AI]
New Sources (360Brew Era & Modern Ecosystem)
1. Sanjabi, M., Firooz, H., & 360Brew Team. (2025, August 23). 360Brew: A Decoder-only Foundation Model for Personalized Ranking and Recommendation. arXiv. arXiv:2501.16450v4 [cs.IR]. (Primary source for the 360Brew model, its prompt-based architecture, and In-Context Learning mechanism.)
2. He, S., Choi, J., Li, T., Ding, Z., Du, P., Bannur, P., Liang, F., Borisyuk, F., Jaikumar, P., Xue, X., & Gupta, V. (2025, June 15). Large Scalable Cross-Domain Graph Neural Networks for Personalized Notification at LinkedIn. arXiv. arXiv:2506.12700v1 [cs.LG]. (Primary source for the evolution from domain-specific GNNs to the holistic Cross-Domain GNN for candidate generation.)
3. Firooz, H., Sanjabi, M., Jiang, W., & Zhai, X. (2025, January 2). Lost-in-Distance: Impact of Contextual Proximity on LLM Performance in Graph Tasks. arXiv. arXiv:2410.01985v2 [cs.AI]. (Primary source detailing the core technical challenge of long-context reasoning that informs the prompt engineering and history construction strategies for 360Brew.)
4. Behdin, K., Dai, Y., Fatahibaarzi, A., Gupta, A., Song, Q., Tang, S., et al. (2025, February 20). Efficient AI in Practice: Training and Deployment of Efficient LLMs for Industry Applications. arXiv. arXiv:2502.14305v1 [cs.IR]. (Primary source for the practical deployment strategies, including knowledge distillation and pruning, used to create efficient SLMs from large foundation models like 360Brew for production use.)
5. Lyu, L., Zhang, C., Shang, Y., Jha, S., Jain, H., Ahmad, U., & the OpenConnect Team. (2024). OpenConnect: LinkedIn's next-generation AI pipeline ecosystem. LinkedIn Engineering Blog. (Contextually implied source describing the replacement of the legacy ProML platform with the modern OpenConnect/Flyte-based ecosystem for all AI/ML training pipelines.)
6. Kwon, W., et al., & vLLM Team. (2024). How we leveraged vLLM to power our GenAI applications at LinkedIn. LinkedIn Engineering Blog. (Contextually implied source detailing the adoption of vLLM as the core inference serving engine for large-scale GenAI and foundation models at LinkedIn.)
We used Google's Gemini 2.5 Pro model to synthesize this guide from the data. The source data is approximately 325,000 words.