ASOpatents.com compiles a list of patents that likely shape the algorithms of the Apple App Store, Google Play Store, and other major platforms. While it's uncertain whether these patents are actually implemented in those algorithms, the site offers clues about how search results, recommended apps, and other data points may be generated.

US10678832B2 & US20250252111A1 — From Synonym Clusters to AI Vectors: How Apple’s Search Learned to Understand Meaning

Two patents that trace Apple’s path from keyword matching to AI-powered semantic search

Patent 1: US10678832B2 — “Search Index Utilizing Clusters of Semantically Similar Phrases”
Filed: September 29, 2017 · Granted: June 9, 2020 · Status: Expired
Inventors: Saravana Kumar Siva Kumaran, Jiayan Gan, Mohamed Temraz, Santosh Shankar

Patent 2: US20250252111A1 — “Techniques for Providing Relevant Search Results for Search Queries”
Filed: January 2025 · Published: August 7, 2025 · Status: Pending
Inventors: Sean Suchter, Ramana M. Idury, Santosh Shankar, + 5 others

Shared inventor across both patents: Santosh Shankar


Important Scope Note: These Are Not Exclusively App Store Patents

Before diving in, a critical caveat. These patents were not filed specifically for App Store search. Santosh Shankar, the shared inventor, was hired into Apple’s Siri Search team — a broad organization that owns search infrastructure across Siri, Spotlight, App Store, Apple Music, Safari Suggestions, and other Apple surfaces. The 2017 patent describes a generic “document corpus” without naming the App Store. The 2025 patent uses music search as its primary example — “songs,” “song output vectors,” “song manager” — pointing more directly at Apple Music than the App Store.

However, there are strong reasons to believe the underlying technology applies to App Store search:

  • Apple’s own February 2025 research paper, “Scaling Search Relevance: Augmenting App Store Ranking with LLM-Generated Judgments,” explicitly confirms that LLM-based semantic ranking is deployed in the production App Store, with A/B tested conversion rate improvements.
  • Apple’s search infrastructure is shared across its digital stores. Technology built for one surface frequently propagates to others.
  • The 2017 patent’s generic architecture (phrase clustering mapped to documents) is store-agnostic by design — it works equally well for apps, songs, movies, or podcasts.

We cover these patents because they reveal the architectural direction of Apple’s search systems, including the App Store. But readers should note that the patents themselves don’t name the App Store, and specific implementation details may differ across Apple’s various search surfaces.


Why These Two Patents Belong Together

These patents are separated by eight years, but they solve the same problem: what happens when a user searches for “photo editor” but the best app’s listing says “image retouching tool”?

A pure keyword search misses the match entirely. The words are different even though the meaning is identical.

The 2017 patent solves this by pre-grouping synonyms into clusters: “photo editor,” “image editor,” “picture editing,” and “photo retouching” all go into one cluster. Search for any of them, and you find documents containing any of them.

The 2025 patent replaces clusters with something far more powerful: a transformer-based large language model (LLM) that converts every query and every piece of content into a mathematical vector. Instead of matching words or clusters, it measures the distance between meanings in a continuous vector space — with personalization baked in.

Together, these patents tell the story of how Apple’s search evolved from “these words mean the same thing” to “this is what you personally mean.”


2017: The Cluster Approach — Grouping Synonyms Before Anyone Searches

The Problem It Solves

Traditional search indexes work like dictionaries: look up a word, get a list of documents containing that word. If you search “photo editor,” you only find apps that literally contain the phrase “photo editor” in their metadata. Apps described as “image retouching tool” or “picture manipulation software” are invisible — even though they do exactly what you want.

The older Query Expansion patent (US9280595B2) partially solved this by identifying related indicator terms. But it still required a lookup per query at search time. The 2017 patent takes a fundamentally different approach: reorganize the entire search index in advance so that synonyms are already grouped together.

How It Works in Plain English

Step 1: Extract every meaningful phrase from every app listing. The system reads through the entire document corpus (all app metadata in the store) and identifies phrases — single words, pairs of words, longer expressions.

Step 2: Convert each phrase into a vector using a neural network. This is where machine learning enters. A neural network processes each phrase and outputs a vector — a list of numbers that represents the phrase’s meaning in mathematical space. Phrases with similar meanings produce vectors that are close together. “Photo editor” and “image editor” end up near each other. “Photo editor” and “pizza delivery” end up far apart.

Step 3: Group nearby vectors into clusters. The system measures the distance between every phrase’s vector and groups the closest ones together. Each cluster gets a unique ID. So Cluster #4817 might contain: “photo editor,” “image editor,” “picture editing tool,” “photo retouching app,” “image manipulation software.”

The patent specifies that similarity is measured using cosine similarity — the angle between two vectors. An angle of 0° means identical meaning. An angle of 90° means completely unrelated.
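Cosine similarity is straightforward to sketch. The toy 3-dimensional vectors below are invented for illustration (real embeddings have hundreds of dimensions, and the actual values would come from a trained neural network):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: 1.0 = same direction, 0.0 = orthogonal."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-dimensional embeddings, hand-picked so related phrases point the same way.
photo_editor   = np.array([0.90, 0.80, 0.10])
image_editor   = np.array([0.85, 0.82, 0.12])
pizza_delivery = np.array([0.05, 0.10, 0.95])

print(cosine_similarity(photo_editor, image_editor))    # near 1.0: similar meaning
print(cosine_similarity(photo_editor, pizza_delivery))  # much lower: unrelated
```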

Step 4: Also find synonyms through behavioral data. Beyond vector similarity, the system uses click graphs (which apps users clicked after which searches), web graphs (link structures between pages), and edit distances (how many character changes separate two phrases). This catches synonyms that a neural network alone might miss — like brand names or slang terms.

Step 5: Build a new search index that maps cluster IDs to documents. Instead of the traditional index (phrase → documents), the new index maps (cluster ID → documents). A document enters a cluster’s index if it contains any phrase from that cluster.

Step 6: Build a lookup file that maps individual phrases to their cluster IDs. This is stored as a memory-mapped file (MMF) for instant access — sorted alphabetically, compressed, directly readable from disk without loading into RAM.
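The six steps can be sketched end to end. Everything below is a simplified stand-in: the phrase vectors are hand-written rather than produced by a neural network, the greedy threshold clustering is one of many possible clustering methods (the patent does not prescribe a specific algorithm), and a plain dict stands in for the sorted, compressed memory-mapped lookup file:

```python
import math
from collections import defaultdict

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Steps 1-2: phrases and their (hypothetical, hand-written) embedding vectors.
phrase_vectors = {
    "photo editor":     [0.90, 0.80, 0.10],
    "image editor":     [0.85, 0.82, 0.12],
    "photo retouching": [0.88, 0.78, 0.15],
    "pizza delivery":   [0.05, 0.10, 0.95],
    "food ordering":    [0.10, 0.12, 0.90],
}

# Step 3: greedy clustering — a phrase joins the first cluster whose seed
# phrase is within the similarity threshold, otherwise it starts a new cluster.
THRESHOLD = 0.95
clusters = []  # list of lists of phrases
for phrase, vec in phrase_vectors.items():
    for cluster in clusters:
        if cosine(vec, phrase_vectors[cluster[0]]) >= THRESHOLD:
            cluster.append(phrase)
            break
    else:
        clusters.append([phrase])

# Steps 5-6: cluster-ID -> documents index, plus the phrase -> cluster-ID lookup.
documents = {
    "app_1": "image editor with retouching tools",
    "app_2": "pizza delivery and food ordering",
}
cluster_index = defaultdict(set)
phrase_to_cluster = {}
for cid, cluster in enumerate(clusters):
    for phrase in cluster:
        phrase_to_cluster[phrase] = cid
        for doc_id, text in documents.items():
            if phrase in text:
                cluster_index[cid].add(doc_id)
```

With these toy values, “photo editor,” “image editor,” and “photo retouching” land in one cluster, and app_1 is indexed under that cluster even though its text never contains the word “photo.”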

What Happens When You Search

When you type “photo editor”:

  1. The system finds “photo editor” in the lookup file and retrieves its cluster IDs — say, Cluster #4817 and Cluster #2091.
  2. It looks up Cluster #4817 in the search index and gets all documents containing any phrase in that cluster — including apps that say “image retouching tool” but never use the word “photo.”
  3. It looks up Cluster #2091 and gets another set of documents.
  4. It intersects the results and passes them to a relevance engine for final ranking.

The critical advantage: the synonym expansion happens during indexing, not during searching. When your query arrives, there’s no neural network running in real time. It’s just two fast lookups. This is why the patent emphasizes reduced latency and lower processing costs compared to traditional query expansion.
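The query-time path reduces to lookups and a set intersection. The structures below are hypothetical stand-ins for the pre-built memory-mapped lookup file and cluster index (the cluster IDs and app IDs are invented):

```python
# Hypothetical pre-built structures, produced offline at indexing time.
phrase_to_clusters = {"photo editor": [4817, 2091]}
cluster_index = {
    4817: {"app_1", "app_2", "app_3"},  # docs containing any phrase in cluster 4817
    2091: {"app_2", "app_3", "app_9"},  # docs containing any phrase in cluster 2091
}

def search(query: str) -> set[str]:
    """Two fast lookups and a set intersection — no model runs at query time."""
    cluster_ids = phrase_to_clusters.get(query, [])
    if not cluster_ids:
        return set()
    result_sets = [cluster_index[cid] for cid in cluster_ids]
    # Candidates matching every concept in the query, handed to the relevance
    # engine for final ranking.
    return set.intersection(*result_sets)

print(search("photo editor"))  # only the apps present in both clusters
```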

Multi-Language Support

The lookup file is segmented by language. English phrases are in one block, French in another, Japanese in another. Each language has its own sorted set of phrases mapped to cluster IDs. This means the clustering works per-language — “photo editor” (English) and “éditeur de photos” (French) would be in separate language blocks but could share cluster IDs if the same concept appears across languages.
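The language segmentation might look like this in miniature. A nested dict stands in for the language blocks of the memory-mapped file, and the shared cluster ID 4817 is an invented example of the same concept appearing in two languages:

```python
# Hypothetical language-segmented lookup: one block per language, each mapping
# phrases to cluster IDs. The same concept can share a cluster ID across blocks.
lookup = {
    "en": {"image editor": [4817], "photo editor": [4817]},
    "fr": {"éditeur de photos": [4817], "éditeur d'images": [4817]},
}

def clusters_for(phrase: str, language: str) -> list[int]:
    """Resolve a phrase within its own language block only."""
    return lookup.get(language, {}).get(phrase, [])
```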


2025: The LLM Vector Approach — Understanding Meaning and Personalizing Results

The Leap Forward

The 2017 patent groups synonyms into discrete buckets. A phrase either belongs to a cluster or it doesn’t. The 2025 patent replaces this binary approach with a continuous vector space where every query and every piece of content exists as a point, and relevance is a distance measurement between points.

More importantly, the 2025 patent adds something the 2017 patent completely lacks: personalization through user account vectors.

How It Works in Plain English

The query side: When you type a search query, a transformer-based LLM processes the text and generates a query text vector — a mathematical representation of what you’re looking for. But it also generates a query statistics vector based on metadata about the query itself: how frequently this query is searched (weekly, monthly, yearly), its historical conversion rates across different time windows, and other statistical attributes.

The user side: Separately, the system builds a user account vector from your personal history — your favorites, library items, recently accessed content, and past search queries. This vector represents your taste and context as a point in the same mathematical space.

Combining the vectors: The query vector and user account vector are passed through fully connected neural network layers and normalization layers to produce a single combined output vector that represents “what this specific user means by this specific query.”

The content side: Every piece of content in the store also has a pre-computed output vector generated by similar processing — analyzing metadata, attributes, and behavioral signals associated with that content.

Ranking by similarity: The system calculates the similarity between the combined user+query vector and each content output vector. Higher similarity = higher ranking. The patent lists multiple distance algorithms: Cosine Similarity, Euclidean Distance, Manhattan Distance, Jaccard Similarity, and others.
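The whole 2025 pipeline can be caricatured in a few lines. Every input here is a random stand-in: in the patent, the query text vector comes from a transformer LLM, the statistics vector from query metadata, the user vector from account history, and the combining layers are trained rather than randomly initialized. This sketch only shows the data flow, not real model behavior:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # toy dimensionality; production embeddings are far larger

def l2_normalize(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

# Hypothetical inputs (random stand-ins for the patent's learned vectors).
query_text_vec   = rng.normal(size=DIM)  # LLM encoding of the query text
query_stats_vec  = rng.normal(size=DIM)  # search frequency, conversion rates, ...
user_account_vec = rng.normal(size=DIM)  # favorites, library, past queries

# One toy dense layer + normalization stands in for the patent's stack of
# fully connected and normalization layers.
W = rng.normal(size=(DIM, 3 * DIM)) / np.sqrt(3 * DIM)
combined = l2_normalize(
    W @ np.concatenate([query_text_vec, query_stats_vec, user_account_vec])
)

# Pre-computed content output vectors, one per store item (names invented).
content_vectors = {
    name: l2_normalize(rng.normal(size=DIM)) for name in ["song_a", "song_b", "song_c"]
}

# Rank by cosine similarity; with unit-length vectors a dot product suffices.
ranking = sorted(
    content_vectors, key=lambda n: float(combined @ content_vectors[n]), reverse=True
)
print(ranking)
```

Because the user account vector enters before the similarity step, two users typing the identical query produce different combined vectors and therefore different rankings.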

What Makes This Different from 2017

  • How meaning is represented: 2017 uses discrete clusters (in or out); 2025 uses a continuous vector space (degrees of similarity)
  • Personalization: 2017 has none (same clusters for all users); in 2025 a user account vector personalizes every query
  • Synonym handling: 2017 uses pre-computed groups; in 2025 it is emergent from vector proximity, with no explicit synonym lists
  • Cross-concept matching: 2017 matches only within pre-defined clusters; 2025 can match concepts that were never explicitly grouped
  • Real-time adaptability: 2017 is static until re-indexed; in 2025 the user vector updates with each interaction
  • Underlying technology: 2017 uses neural network embeddings + clustering; 2025 uses transformer-based LLMs + fully connected layers
  • Query statistics: not used in 2017; in 2025, conversion rates, frequency, and temporal patterns feed the vector

The Shared Inventor

Santosh Shankar appears on both patents. In 2017, he helped build the cluster-based semantic index. In 2025, he’s part of the team that replaced it with LLM-powered vector search. This continuity suggests the 2025 system was designed by people who understood exactly what the 2017 system could and couldn’t do — and built the successor to fix its limitations.


Real-World Confirmation: Apple’s 2025 Research Paper

In February 2025, Apple’s machine learning team published a research paper titled “Scaling Search Relevance: Augmenting App Store Ranking with LLM-Generated Judgments.” This paper confirms that Apple’s production App Store ranker uses two relevance objectives:

Behavioral relevance — ranking based on what users click and download (the approach from the older patents).

Textual relevance — ranking based on semantic fit between the query and app metadata (the approach from these two patents).

The paper revealed that Apple tested LLM-generated relevance judgments against human annotations and found them sufficiently aligned to use as training data for the production ranking model. A/B testing showed a +0.24% worldwide conversion rate increase, with the largest gains on tail queries — exactly the long-tail, unusual searches where traditional keyword matching fails and semantic understanding matters most.

This confirms the 2025 patent isn’t theoretical. Apple is actively deploying LLM-based semantic ranking in production.


What This Means for ASO

The shift from keywords to concepts is not future — it’s happening now. Apple’s own research confirms LLM-based semantic ranking is live in the App Store. Your metadata strategy needs to optimize for conceptual relevance, not just keyword density.

Exact keyword matches become less important over time. In the 2017 cluster system, your app had to contain at least one phrase from the right cluster. In the 2025 vector system, your app just needs to be “close enough” in vector space. An app described as “create vector illustrations and animations” could match a query for “graphic design software” even without sharing a single word — because the LLM understands they describe similar things.

Tail queries are where this matters most. Apple’s own A/B test showed the biggest gains on tail queries — uncommon, specific searches that traditional keyword matching handles poorly. If your app serves a niche function, semantic search is your friend. Describe what your app does in natural, varied language rather than repeating a few keywords.

Personalization means different users see different results. The 2025 patent’s user account vector means that two people searching the same query may get different rankings. A professional designer searching “design tool” might see different top results than a student searching the same term — because their download history, favorites, and past queries differ. ASO can’t fully control this, but it means your app’s long-term relationship with its users (retention, repeat engagement, library additions) feeds back into how visible you are to similar users.

Natural language descriptions beat keyword stuffing. LLMs are trained on natural language. They understand natural sentences better than keyword-stuffed metadata. “Create stunning vector illustrations, design logos, and animate your artwork” gives the LLM more semantic signal than “vector design graphic design logo design illustration design tool.”

Multi-language semantic matching is improving. The 2017 patent already segmented by language with shared cluster IDs. The 2025 LLM approach can potentially match across languages at the vector level — a Japanese user’s query could semantically match an English app description if the LLM is multilingual. This has major implications for localization strategy.


Technical Details

Patent 1: Cluster-Based Semantic Index

Patent Number: US10678832B2 (published as US20190102400A1)
Filed: September 29, 2017
Granted: June 9, 2020
Status: Expired — Fee Related
Assignee: Apple Inc.
Key Classification: G06F 16/355 (Clustering — creation/modification of classes)

Patent 2: LLM Vector-Based Search

Publication Number: US20250252111A1
Filed: January 2025
Published: August 7, 2025
Status: Pending
Assignee: Apple Inc.
Key Classification: G06F 16/24578 (Ranking with user adaptation)
Notable Inventors: Sean Suchter (Apple search engineering), Santosh Shankar (also on 2017 patent)

How This Connects to the Other App Store Patents

These two patents represent the third generation of Apple’s approach to understanding search queries:

Generation 1 (2012): Query Expansion (US9280595B2) — identify indicator terms that co-occur with the search keyword, use them to re-rank results. Requires per-query lookup at search time.

Generation 2 (2017): Semantic Clustering (US10678832B2) — pre-group synonymous phrases into clusters, build the search index around clusters instead of individual phrases. Single lookup, no real-time neural network needed.

Generation 3 (2025): LLM Vector Search (US20250252111A1) — convert everything (queries, user profiles, content) into vectors using transformer LLMs, rank by vector similarity. Personalized, continuous, and capable of understanding meaning that was never explicitly programmed.

Each generation doesn’t necessarily replace the previous one — they likely coexist, with different approaches used for different query types, different markets, or different stages of the ranking pipeline. The Query Classifier patent (US9405832B2) may still route queries to different ranking strategies, some of which use the older co-occurrence approach and others the newer vector approach.


Patent sources: US10678832B2 and US20250252111A1 via Google Patents
