Apple’s patent starts from a simple observation: users search for apps with fundamentally different intentions. Someone typing “Angry Birds” has a completely different goal than someone typing “strategy game.” Treating both queries the same way — running the same keyword-matching algorithm against the same database with the same ranking weights — produces poor results for at least one of those users.
Apple’s solution is a classification engine that sits between the user’s query and the search algorithm. Before any ranking happens, the system first asks: what kind of search is this?
The patent defines three primary query categories:
Navigational — The user is looking for a specific, known app. They already know what they want. Examples: “Angry Birds,” “Spotify,” “WhatsApp Pro.”
Functional — The user is looking for any app that performs a specific task. They don’t have a particular app in mind. Examples: “constellation identification,” “to do list,” “restaurant guides.”
Browse — The user wants to explore a topic area without a specific app or function in mind. Examples: “best apps,” “free games,” “top music.”
Each of these categories triggers a different search technique with different ranking weights. This is the most important thing the patent reveals: the ranking algorithm changes based on what Apple thinks you’re looking for.
How Apple Classifies a Query
The classification engine uses three independent analysis layers, and the patent describes each in detail.
Layer 1: Term Analysis
The system examines the words in the query itself for structural signals:
- Capitalization patterns. Queries with capitalized words after the first word are more likely navigational. “Angry Birds” signals a specific app name. “angry birds” might still be navigational, but the capitalization provides a stronger signal.
- Versioning words. Terms like “pro,” “lite,” “light,” “premium,” “1,” “2,” “3,” “seasons,” or “holiday” are strong navigational indicators. These words typically appear in app names, not in descriptions of what apps do.
- Browse trigger words. Words like “best,” “top,” “hot,” “free,” “apps,” “games,” “music,” and “video” trigger the browse classification. Users aren’t looking for a specific app — they’re exploring.
- Functional language patterns. Phrases like “that can,” “how to,” or terms containing “information” tend to signal functional queries. The patent also lists examples like “fm radio” and “to do list” as functional terms.
- Word count. Queries with more words tend to skew functional. A user typing a longer, more descriptive phrase is more likely describing what they need an app to do.
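The term-analysis signals above can be sketched as a simple heuristic scorer. This is an illustrative reconstruction, not Apple's implementation: the word lists come from the patent's examples, but the score increments and the three-word threshold are assumptions.

```python
# Illustrative sketch of the patent's term-analysis layer.
# Word lists follow the patent's examples; score increments are assumptions.

VERSIONING_WORDS = {"pro", "lite", "light", "premium", "1", "2", "3", "seasons", "holiday"}
BROWSE_WORDS = {"best", "top", "hot", "free", "apps", "games", "music", "video"}
FUNCTIONAL_PATTERNS = ("that can", "how to", "information")

def term_analysis(query: str) -> dict:
    """Return heuristic scores for navigational / functional / browse intent."""
    scores = {"navigational": 0.0, "functional": 0.0, "browse": 0.0}
    words = query.split()
    lower = query.lower()

    # Capitalized words after the first word suggest a specific app name.
    if any(w[0].isupper() for w in words[1:]):
        scores["navigational"] += 1.0

    # Versioning terms ("pro", "2", ...) typically appear in app names.
    if any(w.lower() in VERSIONING_WORDS for w in words):
        scores["navigational"] += 1.0

    # Browse trigger words signal open-ended exploration.
    if any(w.lower() in BROWSE_WORDS for w in words):
        scores["browse"] += 1.0

    # Functional language patterns describe a task, not an app.
    if any(p in lower for p in FUNCTIONAL_PATTERNS):
        scores["functional"] += 1.0

    # Longer, more descriptive queries skew functional.
    if len(words) >= 3:
        scores["functional"] += 0.5

    return scores
```

Running this on the patent's examples, "Angry Birds" picks up a navigational signal from capitalization, while "best free games" scores as browse with a mild functional lean from its length.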
Layer 2: Preliminary Search Analysis
Before finalizing the classification, the system runs a quick preliminary search and analyzes the results:
- Number of search results. A query that returns very few results with an exact title match is more likely navigational (the user typed the app’s name). A query returning many results is more likely functional or browse.
- Exact title matches. If the query terms exactly match several app titles, the system weights this information in the classification decision.
Layer 3: Empirical Behavioral Data
This is the most sophisticated layer. Apple continuously collects data about what users actually do after searching:
- Download distribution analysis. For every search term, Apple tracks which apps users download after viewing search results. The distribution shape reveals query intent.
The patent provides a concrete example with actual data. For the query “BBC”:
- 1,104 unique apps were downloaded by users after searching.
- BBC News alone accounted for 50% of all downloads.
- The cumulative download curve flattens quickly — after just the top app, most of the download activity is captured.
- Classification: Navigational. Half of all users wanted one specific app.
For the query “News”:
- 2,212 unique apps were downloaded.
- Fox News, the most downloaded, accounted for only 12%.
- BBC News got 7%, ABC News 6%, Flipboard 4%.
- The cumulative curve climbs slowly and doesn’t reach 50% even after the top four apps.
- Classification: Functional. Users downloaded a wide variety of apps — they were looking for any good news app, not a specific one.
The system uses these distribution patterns to assign category scores. The patent describes how a query might be associated with a score vector like [0.05, 0.85, 0.45] for navigational, functional, and browse respectively — meaning the system doesn’t make a binary choice but assigns a weighted score to each category.
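The distribution logic can be illustrated with the patent's own numbers. This is a sketch, not Apple's code: the 50% concentration threshold is an assumption chosen to reproduce the BBC/News example, and the patent does not specify a cutoff value.

```python
# Sketch of download-distribution analysis using the patent's example data.
# The concentration threshold is an assumption, not a documented value.

def classify_by_distribution(download_shares: list, threshold: float = 0.5) -> str:
    """Classify a query as navigational if one app captures most downloads."""
    top_share = max(download_shares) if download_shares else 0.0
    return "navigational" if top_share >= threshold else "functional"

# "BBC": BBC News alone accounted for 50% of downloads across 1,104 apps.
bbc_shares = [0.50] + [0.50 / 1103] * 1103
# "News": Fox News 12%, BBC News 7%, ABC News 6%, Flipboard 4%, long tail.
news_shares = [0.12, 0.07, 0.06, 0.04]

print(classify_by_distribution(bbc_shares))   # navigational
print(classify_by_distribution(news_shares))  # functional
```

A production system would presumably output graded scores rather than a binary label, consistent with the score-vector approach described above, but the concentration measure is the same idea.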
Machine Learning Integration
The patent explicitly states that the classification can use a learning algorithm such as a support vector machine or neural network. The learning algorithm takes search-query terms as inputs, accesses hidden variables (cumulative buy rates, average user ratings, download distributions), and outputs a category assignment. It dynamically adjusts its weights based on user satisfaction signals, including where in the results list a user’s selected app appeared.
How Each Category Gets Searched Differently
Once a query is classified, the system selects a search technique. This is where the patent’s implications for ASO become concrete.
Navigational Search Strategy
When the system classifies a query as navigational, the ranking algorithm emphasizes:
- Exact text matching in app titles. The system counts how many times search terms appear in an app’s title and metadata, with title matches weighted heavily.
- Full-string matching. Rather than breaking “Angry Birds” into “angry” and “birds” and matching them independently, the system may search for the full phrase as a unit.
- Reduced emphasis on popularity. If the user already knows which app they want, download counts and ratings matter less than finding the right title match.
The patent describes this as a weighting example: a text-based search might carry a weight of 0.8 for navigational queries, with other techniques (like user rankings or synonym detection) carrying only 0.2.
ASO implication: For branded keywords and competitor keywords, exact title matching dominates. If someone searches your app’s name, your title is the primary ranking factor. This is why branded campaigns in Apple Search Ads have such high conversion rates — the algorithm is already biased toward giving users exactly what they typed.
Functional Search Strategy
For functional queries, the algorithm shifts its emphasis:
- Metadata text similarity. The system searches not just for exact keyword matches but for apps with metadata having “similar meaning to the search terms.” The patent gives the example of searching for “strategy game” and also matching “strategic game,” “tactical puzzle,” or “planning game.”
- User popularity signals. Apps with high download frequencies, high download counts, and high user ratings are preferentially ranked. Popularity matters much more for functional queries because there’s no single “right” answer.
- Broader matching. The system doesn’t require exact matches — it looks for semantic equivalents and related terms.
The weighting might shift to 0.4 for text matching and 0.6 for other techniques (popularity, ratings, synonyms).
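The shifting weights can be sketched as a single ranking function parameterized by category. The 0.8/0.2 and 0.4/0.6 splits come from the patent's examples; the assumption that both input scores are normalized to [0, 1] is mine, for illustration.

```python
# Sketch of category-dependent score blending.
# Weight splits follow the patent's examples; score inputs are assumed
# to be normalized to [0, 1] for illustration.

CATEGORY_WEIGHTS = {
    # (text-match weight, other-techniques weight)
    "navigational": (0.8, 0.2),  # patent's navigational example
    "functional": (0.4, 0.6),    # patent's functional example
}

def ranking_score(text_match: float, popularity: float, category: str) -> float:
    """Blend a text-match score with popularity signals per query category."""
    w_text, w_other = CATEGORY_WEIGHTS[category]
    return w_text * text_match + w_other * popularity

# A perfect title match with modest popularity wins for a navigational
# query but loses to a popular near-match for a functional one.
exact_title_nav = ranking_score(1.0, 0.2, "navigational")  # 0.84
popular_app_nav = ranking_score(0.5, 0.9, "navigational")  # 0.58
exact_title_fun = ranking_score(1.0, 0.2, "functional")    # 0.52
popular_app_fun = ranking_score(0.5, 0.9, "functional")    # 0.74
```

The same two apps flip order depending on the classification — which is exactly the behavior the patent's differentiated search strategies are designed to produce.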
ASO implication: For generic, functional keywords — the ones that drive discovery — download velocity, ratings, and metadata breadth all matter significantly more than they do for navigational queries. Simply stuffing your keyword field with exact-match terms isn’t enough. Apple’s algorithm considers whether your app is popular among people searching for that function.
Browse Search Strategy
For browse queries, the patent indicates that:
- User ratings carry particular weight. Browse queries represent the most uncertain user intent, so the system leans heavily on social proof signals.
- Popularity-based ranking dominates. The system surfaces apps that other users have found valuable.
- Category alignment matters more, since browse queries are often category-level.
ASO implication: For high-volume category terms like “free games” or “best apps,” your ratings, review count, and download velocity are the primary ranking levers. No amount of keyword optimization will overcome low ratings for browse-type queries.
The Search-Term Look-Up Table (LUT)
One of the patent’s most revealing technical details is the Search-Term Look-Up Table. Apple maintains a database that associates every frequently searched term with:
- A direct category assignment or category score vector
- Query frequency — how often the term is searched
- Number of search results returned for the term
- Click-through rate (CTR) — broken into “top CTR” (fraction of users clicking the #1 result), “cumulative CTR” (fraction clicking any top-n result), and “exact-term CTR” (fraction clicking an app whose name exactly matches the query)
- Buy/download rate (BR) — similarly broken into top, cumulative, and exact-term variants
- Average and maximum user ratings for apps associated with the term
The LUT is continuously updated. Older data gets reduced weight while newer data gets higher weight. This means classifications can shift over time as user behavior changes.
ASO implication: The App Store’s understanding of your keywords isn’t static. A term that was “functional” last year might become “navigational” if a dominant app emerges for that term. Conversely, a navigational term could become functional if the dominant app loses market share. Monitoring download distribution changes for your target keywords matters.
The Feedback Loop
The patent describes a complete feedback loop:
- User enters a search query.
- The system classifies the query and returns category-appropriate results.
- The user interacts with results — viewing app detail pages, downloading apps, or abandoning the search.
- All interactions are logged and associated with the search query in the empirical query database.
- The classification model updates based on this new behavioral data.
This means every App Store search and every subsequent user action feeds back into the classification system. The algorithm learns from aggregate user behavior, continuously refining its understanding of what each query means.
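The loop can be sketched as a log that feeds the per-query download distribution back into classification. All names and the toy data here are hypothetical; only the mechanism — logged downloads updating a distribution keyed by query — follows the patent.

```python
# Sketch of the feedback loop: each logged download updates the per-query
# download distribution used for classification. Names are hypothetical.

from collections import Counter, defaultdict

query_downloads = defaultdict(Counter)

def log_download(query: str, app_id: str) -> None:
    """Record that a user downloaded app_id after searching for query."""
    query_downloads[query][app_id] += 1

def top_download_share(query: str) -> float:
    """Fraction of downloads captured by the single most-downloaded app."""
    counts = query_downloads[query]
    total = sum(counts.values())
    return max(counts.values()) / total if total else 0.0

# Simulated sessions: "bbc" concentrates on one app, "news" spreads out.
for _ in range(9):
    log_download("bbc", "bbc-news")
log_download("bbc", "bbc-iplayer")
for app in ["fox-news", "bbc-news", "abc-news", "flipboard"]:
    log_download("news", app)
```

Here `top_download_share("bbc")` is 0.9 while `top_download_share("news")` is 0.25 — the concentration statistic that, in the patent's scheme, pushes the first term toward a navigational classification and the second toward functional.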
ASO implication: User behavior after seeing your app in search results directly affects how the algorithm classifies the queries you rank for. If users consistently download your app after searching a particular term, you’re reinforcing that term’s navigational character (if they mostly download your app) or functional character (if they download many different apps including yours).
What About Scoring and Ranking?
The patent describes how ranking scores are generated:
- A base ranking score is computed from comparing app characteristics to search-query terms. This includes counting keyword appearances in titles and metadata.
- The score is then adjusted by popularity factors — download frequency, download count, and user ratings.
- The weighting between text matching and popularity depends on the assigned query category.
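The base-score computation described above can be sketched as a term-occurrence count over title and metadata, followed by a popularity adjustment. The title weight, the normalization constants, and the multiplicative form of the adjustment are all assumptions; the patent describes counting keyword appearances and adjusting by popularity, not these specific formulas.

```python
# Sketch of base-score computation: count query terms in title and metadata,
# weight title matches more heavily, then adjust by popularity factors.
# The title weight and normalization constants are illustrative assumptions.

def base_score(query: str, title: str, metadata: str, title_weight: float = 3.0) -> float:
    """Count query-term occurrences, with title matches weighted heavily."""
    score = 0.0
    for term in query.lower().split():
        score += title_weight * title.lower().split().count(term)
        score += metadata.lower().split().count(term)
    return score

def adjusted_score(base: float, downloads: int, rating: float) -> float:
    """Adjust the text-match base score by simple popularity factors."""
    popularity = min(downloads / 1_000_000, 1.0) * (rating / 5.0)
    return base * (1.0 + popularity)

b = base_score("angry birds", "Angry Birds", "slingshot birds at pigs")
# Two title matches (3.0 each) plus one metadata match: 7.0
```

How much the popularity adjustment is allowed to move the final ranking would then depend on the category weights discussed earlier.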
Information presented for each search result can include: app name, publisher, price, category, brief description, number of downloads, average rating, device compatibility, size, version, update date, languages, screenshots, and user reviews. The system decides which information to show based on the search context and user settings.
Additional Category Types
Beyond the three primary categories, the patent mentions two additional classification types:
- Location-based — queries with geographical intent
- Price-based — queries focused on pricing (likely triggered by terms like “free” or price-related language)
These suggest Apple applies even more granular ranking strategies depending on the full context of the query, though the patent focuses primarily on the navigational/functional/browse framework.
Practical ASO Takeaways
1. Map your keywords by query type. Not all keywords should be optimized the same way. Identify which of your target keywords are navigational, functional, or browse, and adjust your strategy accordingly.
2. For navigational defense (branded terms): Your app title is everything. Exact title matching is the dominant signal. Ensure your brand name is clear, unambiguous, and prominent. Branded Apple Search Ads campaigns are critical for defense because competitors’ title matches won’t outperform yours.
3. For functional keywords (your growth engine): Metadata breadth, download velocity, and ratings matter far more than for navigational terms. Use your keyword field, subtitle, and description to cover semantic variations. Drive download velocity through paid acquisition to boost your organic ranking on these terms.
4. For browse terms (high volume, high competition): Ratings and social proof dominate. If your rating is below 4.5, improving it will likely do more for browse-term rankings than any metadata change. Download volume is the other major lever.
5. Watch for classification shifts. If a competitor launches a dominant app on a functional keyword you target, that term could shift toward navigational — meaning your ranking strategy needs to change. Monitor category-level download concentration for your target keywords.
6. The empirical data updates continuously. Apple weights recent behavioral data more heavily than old data. Seasonal campaigns, new competitor launches, and changes in user behavior all affect how the algorithm classifies queries and ranks apps. ASO is not a set-it-and-forget-it exercise.
Technical Details
- Patent Number: US9405832B2
- Application Number: US13/566,800
- Priority Date: May 31, 2012
- Filing Date: August 3, 2012
- Publication Date (Application): December 5, 2013
- Grant Date: August 2, 2016
- Status: Expired (Fee Related)
- Adjusted Expiration: April 17, 2035
- PCT: WO2013/180867A1 (filed April 26, 2013)
- Classification: G06F 16/9538 (Presentation of query results); G06F 16/951 (Indexing; Web crawling techniques)
Cited Prior Art Keywords: search, query, software applications, user, category
Important Caveats
This patent was filed in 2012 and granted in 2016. Apple’s actual App Store search algorithm has certainly evolved since then — the company’s 2025 LLM-based ranking patent (US20250252111A1) and its published research on using large language models for search relevance judgments confirm that the system has become significantly more sophisticated. However, this patent establishes the foundational architecture: query intent classification driving differentiated ranking strategies. There is no reason to believe Apple has abandoned this approach — the evidence suggests they’ve built increasingly powerful systems on top of it.
The patent describes what Apple could implement, not necessarily exactly what is deployed. Specific weighting numbers, threshold values, and implementation details may differ from production systems. But the structural insight — that different query types receive different algorithmic treatment — is a design principle, not just a technical detail.
Patent source: US20130325892A1 / US9405832B2 via Google Patents