At the moment, content and structure play the most important role in search engine promotion. But how do you decide what to write about and which sections and pages to create on the site? On top of that, you need to find out exactly what the target visitor of your resource is interested in. To answer all these questions, you need to collect a semantic core.

A semantic core is a list of words and phrases that fully reflects the theme of your site.

In this article I will explain how to collect it, clean it, and break it down into a structure. The result will be a complete structure with queries clustered across pages.

Here is an example of a query core broken down into a structure:


By clustering I mean breaking your search queries down into separate pages. This method is relevant for promotion in both Yandex and Google. In this article I will describe a completely free way of creating a semantic core, but I will also show options using various paid services.

After reading the article, you will learn how to:

  • Choose the right queries for your topic
  • Collect the most complete core of phrases
  • Clean out non-target queries
  • Group queries and create a structure

Having collected the semantic core, you will be able to:

  • Create a meaningful structure on the site
  • Create a multi-level menu
  • Fill pages with text and write meta descriptions and titles for them
  • Track your website's positions for queries in search engines

Collection and clustering of the semantic core

Correct compilation of a core for Google and Yandex begins with identifying the main key phrases of your topic. As an example, I will demonstrate composing one using a fictitious online clothing store. There are three ways to collect a semantic core:

  1. Manual. Using the Yandex Wordstat service, you enter your keywords and manually select the phrases you need. This method is quite fast if you need to collect keys for a single page, but it has two disadvantages.
    • The method is not accurate: you can always miss some important words.
    • You will not be able to assemble a semantic core for a large online store this way; the Yandex Wordstat Assistant plugin simplifies the work, but it does not solve the problem.
  2. Semi-automatic. Here I mean using a program to collect the core and then manually breaking it down into sections, subsections, pages, and so on. In my opinion, this method of compiling and clustering the semantic core is the most effective, because it has a number of advantages:
    • Maximum coverage of the topic.
    • High-quality breakdown.
  3. Automatic. Nowadays there are several services that offer fully automatic core collection or clustering of your queries. I do not recommend the fully automatic option, because the quality of collection and clustering of the semantic core is currently quite low. Automatic query clustering is gaining popularity and has its place, but you still need to merge some pages manually, because the system does not produce an ideal ready-made solution. And in my opinion, you will simply get confused and will not be able to immerse yourself in the project.

To compile and cluster a full-fledged, correct semantic core for any project, in 90% of cases I use the semi-automatic method.

So, in order, we need to follow these steps:

  1. Selecting queries for the topic
  2. Collecting the core based on those queries
  3. Cleaning out non-target queries
  4. Clustering (breaking phrases into a structure)

I showed an example of a selected semantic core grouped into a structure above. Let me remind you that our example is an online clothing store, so let's start with point 1.

1. Selection of phrases for your topic

At this stage we will need the Yandex Wordstat tool, your competitors, and logic. The goal of this step is to collect a list of thematic high-frequency phrases.

How to select queries to collect semantics from Yandex Wordstat

Go to the service, select the city or region you need, enter what you consider the highest-volume queries, and look at the right-hand column. There you will find the thematic words you need: both leads to other sections and high-frequency synonyms of the entered phrase.

How to select queries before compiling a semantic core using competitors

Enter the most popular queries into the search engine and pick one of the most popular sites, many of which you most likely already know.

Pay attention to the main sections and save the phrases you need.

At this stage, it is important to get one thing right: cover all the possible words from your topic as fully as you can and miss nothing; then your semantic core will be as complete as possible.

Applied to our example, we need a list of phrases/keywords like the following:

  • Clothing
  • Shoes
  • Boots
  • Dresses
  • T-shirts
  • Underwear
  • Shorts

Which phrases are pointless to enter? "Women's clothing", "buy shoes", "prom dress", and the like. Why? These phrases are the "tails" of the queries "clothes", "shoes", and "dresses", and they will be added to the semantic core automatically at the second stage of collection. That is, you can add them, but it would be pointless double work.

Which keys do you need to enter? "Low boots" and "boots", for example, are not the same key: it is the word form that matters, not whether the words share the same root.

For some topics the list of key phrases will be long, while for others it consists of a single word - don't be alarmed. For an online door store, for example, the word "door" may well be enough to compile the semantic core.

So, at the end of this step we should have a list like this.

2. Collecting queries for the semantic core

For a proper, complete collection we need a program. I will show the process in two programs in parallel:

  • Paid - Key Collector, for those who already have it or are ready to buy it.
  • Free - Slovoeb, for those who are not ready to spend money.

Open the program

We create a new project and name it, for example, Mysite.

Now, to continue collecting the semantic core, we need to do a few things:

Create a new Yandex mail account (an old one is not recommended, because it can be banned for sending too many requests). Suppose you created the account ivan.ivanov@yandex.ru with the password super2018. Now you need to specify this account in the settings as ivan.ivanov:super2018 and click the "save changes" button below. More details in the screenshots.

Select the region for the semantic core. Choose only the regions in which you are going to promote, and click save. The frequency of the queries, and whether they make it into the collection at all, will depend on this.

All the settings are done; all that remains is to add the list of key phrases prepared in the first step and click the "start collecting" button.

The process is fully automatic and quite long. You can go make coffee in the meantime, but if the topic is broad, like the one we are collecting, it can run for several hours 😉

Once all the phrases are collected, you will see something like this:

And this stage is over - let's move on to the next step.

3. Cleaning the semantic core

First, we need to remove the queries that are of no interest to us (non-target ones):

  • Those related to another brand, for example Gloria Jeans or Ecco
  • Information queries, for example, “I wear boots”, “jeans size”
  • Similar in topic, but not related to your business, for example, “used clothing”, “clothing wholesale”
  • Queries that are in no way related to the topic, for example "Sims dresses" or "puss in boots" (quite a lot of these turn up after collecting the semantic core)
  • Queries mentioning other regions, metro stations, districts, or streets (no matter which region you collected for, other regions will still slip in)

Cleaning must be done manually as follows:

Enter a word and press Enter; if it finds exactly the phrases we want to remove from the semantic core, select what was found and press Delete.

I recommend entering not the whole word but a stem without prepositions or endings. For instance, the stem "glori" will match both "buy jeans at gloria" and "buy gloria jeans", whereas the full form "gloria" would miss some word forms.

In this way, go through all the points above and remove the unnecessary queries from the semantic core. It may take considerable time, and you may end up deleting most of the collected queries, but the result will be a complete, clean, and correct list of all the possible queries to promote your site with.

Now export all your queries to Excel.

You can also remove non-target queries from the semantics in bulk if you have a list of stop words; this is easy to compile for typical groups of words with cities, metro stations, and streets. You can download the list of words I use at the bottom of the page.
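If you have such a list, this step is easy to script. Below is a minimal sketch of the same stem-based filtering in Python; the file names and the stop-word stems are hypothetical placeholders:

```python
# Sketch: drop queries containing stop-word stems (file names are hypothetical).
# A stem such as "glori" matches every word form, just like the manual search above.

stop_stems = ["glori", "wholesale", "used", "moscow"]  # substitute your own list

with open("queries.txt", encoding="utf-8") as f:
    queries = [line.strip() for line in f if line.strip()]

cleaned = [q for q in queries
           if not any(stem in q.lower() for stem in stop_stems)]

with open("queries_clean.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(cleaned))

print(f"Removed {len(queries) - len(cleaned)} non-target queries")
```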

4. Clustering of the semantic core

This is the most important and interesting part: we need to divide our queries into pages and sections, which together will form the structure of the site. A little theory first - what to rely on when dividing queries:

  • Competitors. Look at how the semantic cores of your competitors in the TOP are clustered and do the same, at least for the main sections. Also check which pages rank for low-frequency queries. For example, if you are not sure whether to create a separate section for the query "red leather skirts", enter the phrase into the search engine and look at the results. If the results contain resources with such sections, it makes sense to make a separate page.
  • Logic. Do the entire grouping of the semantic core logically: the structure should be clear and form a structured tree of pages with categories and subcategories in your head.

And a couple more tips:

  • It is not recommended to place fewer than 3 queries on one page.
  • Don't make too many nesting levels; try to keep it to 3-4 (site.ru/category/subcategory/sub-subcategory)
  • Do not make long URLs. If clustering the semantic core leaves you with many nesting levels, try to shorten the URLs of the categories high in the hierarchy: instead of "your-site.ru/zhenskaya-odezhda/palto-dlya-zhenshin/krasnoe-palto", use "your-site.ru/zhenshinam/palto/krasnoe"

Now to practice

Clustering the core by example

First, let's break all the queries into main categories. Following the logic of competitors, the main categories for a clothing store will be men's clothing, women's clothing, and children's clothing, plus a number of categories not tied to gender or age, such as simply "shoes" or "outerwear".

We group the semantic core in Excel. Open our file and proceed:

  1. We break it down into main sections
  2. Take one section and break it into subsections

I will demonstrate on one section - men's clothing and its subsections. To separate some keys from the others, select the entire sheet and click Conditional Formatting -> Highlight Cells Rules -> Text that Contains.

In the window that opens, type a stem that matches the men's queries (for example, "men") and press Enter.

Now all our men's clothing keys are highlighted, and it is enough to use a filter to separate them from the rest of the collected semantic core.

So let's turn on the filter: select the column with the queries and click Sort & Filter -> Filter.

And now let's sort

Create a separate sheet, cut the highlighted rows, and paste them there. You will keep splitting the core with this method from here on.

Rename this sheet to "Men's clothing" and the sheet with the rest of the semantic core to "All queries". Then create one more sheet, call it "Structure", and place it first. On the structure page, create a tree. You should get something like this:

Now we need to divide the large men's clothing section into subsections and sub-subsections.

For convenient navigation through your clustered semantic core, add links from the structure to the corresponding sheets. To do this, right-click the desired item in the structure and do as shown in the screenshot.

Now you need to methodically separate the queries by hand, deleting along the way anything you failed to notice at the core cleaning stage. Ultimately, thanks to the clustering of the semantic core, you should end up with a structure similar to this:
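By the way, the whole Excel routine above can also be scripted. Here is a minimal sketch using pandas; the file name and the "query" column are assumptions about how you exported the core:

```python
# Sketch: split the core into sheets by stem, mirroring the Excel filter approach.
import pandas as pd

df = pd.read_excel("core.xlsx")  # assumes a column named "query"

# Note the order: "men" is a substring of "women", so women's queries go first.
sections = {"Women's clothing": "women", "Men's clothing": "men"}

with pd.ExcelWriter("core_split.xlsx") as writer:
    rest = df
    for sheet, stem in sections.items():
        mask = rest["query"].str.contains(stem, case=False, na=False)
        rest[mask].to_excel(writer, sheet_name=sheet, index=False)
        rest = rest[~mask]  # "cut" the matched rows, as in Excel
    rest.to_excel(writer, sheet_name="All queries", index=False)
```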

So, what we have learned to do:

  • Select the queries we need to collect the semantic core
  • Collect all possible phrases for these queries
  • Clean out "garbage"
  • Cluster and create structure

And here is what such a clustered semantic core lets you do next:

  • Create a structure on the site
  • Create a menu
  • Write texts, meta descriptions, titles
  • Collect positions to track the dynamics of your queries

Now a little about programs and services

Programs for collecting the semantic core

Here I will describe not only programs but also the plugins and online services that I use.

  • Yandex Wordstat Assistant - a plugin that makes selecting queries from Wordstat convenient. Great for quickly compiling the core of a small site or a single page.
  • Key Collector (Slovoeb is its free counterpart) - a full-fledged program for collecting and clustering a semantic core. It is very popular and offers a huge amount of functionality beyond its main purpose: selecting keys from a host of other systems, auto-clustering options, collecting positions in Yandex and Google, and much more.
  • Just-Magic - a multifunctional online service for compiling a core, auto-clustering, checking text quality, and other functions. The service is shareware: full operation requires a subscription fee.

Thank you for reading the article. With this step-by-step manual you will be able to compile the semantic core of your website for promotion in Yandex and Google. If you have any questions, ask in the comments. The bonuses are below.

Useful materials from the blog on collecting keys for semantics, query clustering and website page optimization.

Semantic core



A correctly composed semantic core brings in exactly the right users, while an unsuccessful one buries the site deep in the search results.

Working with the queries that make up the semantic core (SC) consists of collection, cleaning, and clustering. Once you have the grouping results, you need to determine the optimal place for each group: on a page of your resource, as part of your website's content, or on a third-party site.


How to collect keys for the SC


Briefly about the important things: which operators to use in Wordstat to view the necessary queries, and how to make your work in the service easier.

Wordstat does not provide absolutely accurate information: it does not contain every query, and the data may be distorted, since not everyone uses Yandex. Nevertheless, this data lets you draw conclusions about the popularity of a topic or product, roughly predict demand, collect keys, and find ideas for new content.

You can get the data by simply entering a query into the service, but to make queries more specific there are operators: additional symbols that add clarifications. They work on the word and region search tabs; on the query history tab, only the "+query" operator can be used.
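For reference, the most commonly used Wordstat operators look like this (the sample queries are arbitrary):

```
купить пальто -бу         # minus: exclude queries containing a word ("бу" = second-hand)
пальто +для женщин        # plus: force a stop word (preposition) to be counted
"купить пальто"           # quotes: this phrase only, without extra words ("tails")
"!купить !пальто"         # quotes + "!": exact word forms - the exact frequency
купить (пальто|куртку)    # parentheses with |: either word
[пальто женское]          # square brackets: fix the word order
```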

In the article:

  • Why do you need Wordstat?
  • Working with Wordstat operators
  • How to Read Wordstat Data
  • Extensions for Yandex Wordstat

We strive for the top of the search results: how analyzing articles from the top helps in working on content, what criteria to use for the analysis, and how to do it faster and more efficiently.

It is difficult to track the results of blogging and publishing other texts on the site without detailed analytics. How do you understand why your competitors' articles are in the top and yours are not, even though you write better and with more talent?

In the article:

  • What is usually recommended?
  • How to analyze
  • Disadvantages of the approach
  • Benefits of Content Analysis
  • Tools

How to write optimized texts


Which content gets more links and social signals? Backlinko, in partnership with BuzzSumo, analyzed 912 million blog posts, looking at article length, headline format, social signals and article backlinks, and came up with recommendations for content marketing. We translated and adapted the study.

In the article:

  • Brief conclusions from the content study
  • New knowledge about content marketing, in detail:
  1. Which Content Gets More Links?
  2. Which texts are more popular on social networks?
  3. Backlinks are hard to get
  4. What materials get all the reposts?
  5. How is the number of backlinks and reposts related?
  6. Which headlines bring more shares?
  7. What day of the week is best to publish content?
  8. What content format is reposted most often?
  9. Which Content Format Gets More Links?
  10. How B2B and B2C content generates links and reposts

Many web publications talk about the importance of the semantic core.

There are similar texts on our website, "What to Do". However, they often cover only the general theoretical part of the issue, while the practice remains unclear.

All experienced webmasters insist that it is necessary to create a basis for promotion, but only a few clearly explain how to use it in practice. To remove the veil of secrecy from this issue, we decided to highlight the practical side of using the semantic core.

Why do you need a semantic core?

It is, first of all, the basis and plan for further filling and promoting the site. The semantic basis, divided up according to the structure of the web resource, is a set of signposts on the way to systematic and targeted development of the site.

If you have such a foundation, you don't have to invent the topic of each next article: you just follow the plan's bullet points. With a core, website promotion moves much faster and gains clarity and transparency.

How to use the semantic core in practice

To begin with, it is worth understanding how the semantic basis is generally compiled. Essentially, this is a list of key phrases for your future project, supplemented by the frequency of each request.

Collecting such information is not difficult using the Yandex Wordstat service:

http://wordstat.yandex.ru/

or any other specialized service or program. The procedure is as follows...

How to create a semantic core in practice

1. Collect in a single file (Excel, Notepad, Word) all the queries on your key topic taken from the statistics. Also include phrases "out of thin air": logically plausible phrases, morphological variants (the ways you yourself would search for your topic), and even variants with typos!

2. Sort the list of semantic queries by frequency, from the queries with the maximum frequency down to the queries with minimal popularity (a scripted version of this and the next step is sketched after the list).

3. Remove from the semantic basis all junk queries that do not correspond to the theme or focus of your site. For example, if you tell people about washing machines for free but do not sell them, you do not need words like:

  • "buy"
  • "wholesale"
  • "delivery"
  • "order"
  • "cheap"
  • “video” (if there are no videos on the site)…

The point: do not mislead users! Otherwise, your site will rack up a huge number of bounces, which will affect its rankings. And this is important!

4. When the main list has been cleared of unnecessary phrases and queries and contains a sufficient number of items, you can start using the semantic core in practice.
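Steps 2 and 3 are easy to automate. Here is a minimal sketch that sorts a collected list by frequency and strips the junk words; it assumes a tab-separated file of "query, frequency" pairs, which is just one possible export format:

```python
# Sketch: sort queries by frequency and drop junk words.
# Assumes queries.tsv holds lines of the form: query<TAB>frequency
import csv

junk = ["buy", "wholesale", "delivery", "order", "cheap", "video"]

with open("queries.tsv", encoding="utf-8") as f:
    rows = [(q, int(freq)) for q, freq in csv.reader(f, delimiter="\t")]

rows.sort(key=lambda r: r[1], reverse=True)  # maximum frequency first
rows = [r for r in rows if not any(w in r[0].lower() for w in junk)]

for query, freq in rows:
    print(query, freq)
```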

IMPORTANT: a semantic list can never be considered completely ready and complete. In any topic, you will have to update and supplement the core with new phrases and queries, periodically monitoring innovations and changes.

IMPORTANT: the number of articles on the future site will depend on the number of items in the list. Consequently, this will affect the volume of required content, the working time of the author of the articles, and the duration of filling the resource.

Imposing a semantic core on the site structure

For the entire list to make sense, you need to distribute the queries (depending on frequency) across the site structure. It is hard to give specific numbers here, since the scale and the frequency spread can differ significantly between projects.

If, for example, you take a query with a frequency in the millions as the base, even a phrase with 10,000 queries will seem mediocre.

On the other hand, when your main query has a frequency of 10,000, a mid-frequency query will be around 5,000 per month. That is, a certain relativity applies:

"HF - MF - LF" or "Maximum - Middle - Minimum"

But in any case (even just by eye) you need to divide the entire core into 3 categories (a sketch follows the list):

  1. high-frequency queries (HF - short phrases with the maximum frequency);
  2. mid-frequency queries (MF - all the average queries in the middle of your list);
  3. low-frequency queries (LF - rarely requested phrases and word combinations with low frequency).
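In code, this relative split might look like the sketch below. The thresholds are illustrative, not a standard; the point is that HF/MF/LF are defined relative to your own maximum:

```python
# Sketch: bucket queries into HF / MF / LF relative to the topic's top frequency.
# The 0.5 and 0.05 cut-offs are illustrative; tune them to your niche.

def bucket(frequencies):
    top = max(frequencies.values())
    groups = {"HF": [], "MF": [], "LF": []}
    for query, freq in frequencies.items():
        if freq >= top * 0.5:
            groups["HF"].append(query)
        elif freq >= top * 0.05:
            groups["MF"].append(query)
        else:
            groups["LF"].append(query)
    return groups

print(bucket({"site promotion": 10_000,
              "custom site promotion": 5_000,
              "cheap site promotion with links": 120}))
```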

The next step is to assign 1 or more (maximum 3) queries to the main page. These phrases should be as high-frequency as possible. The high-frequency queries go on the main page!

Next, based on the overall logic of the semantic core, single out several main key phrases from which the sections (categories) of the site will be created. Here you can also use high-frequency queries with a lower frequency than the main one or, better, mid-frequency queries.

The remaining low-frequency phrases are sorted into the created sections and categories and turned into topics for future publications on the site. But it is easier to understand with an example.

EXAMPLE

A clear example of using the semantic core in practice:

1. Home page (HF) - high-frequency query: "site promotion".

2. Section pages (SP) - "custom website promotion", "independent promotion", "site promotion with articles", "site promotion with links". Or simply (if adapted for the menu):

Section No. 1 - “to order”
Section No. 2 – “on your own”
Section No. 3 – “article promotion”
Section No. 4 – “link promotion”

This is all very similar to the data structure on your computer: a logical drive (the main page) - folders (sections) - files (articles).

3. Pages of articles and publications (AP) - "quick site promotion for free", "cheap promotion to order", "how to promote a site with articles", "promotion of a project on the Internet to order", "inexpensive site promotion with links", and so on.

This list will contain the largest number of the most varied phrases, which you will use to create further publications on the site.

How to use a ready-made semantic core in practice

Using the query list is internal content optimization. The secret is to optimize (adjust) each page of the web resource to the corresponding core item. In effect, you take a key phrase and write the most relevant article and page possible for it. A special service will help you assess relevance; it is available at the following link:

To have at least some reference points in your SEO work, it is better to first check the relevance of the sites in the TOP of the search results for the specific query.

For example, if you are writing text on the low-frequency phrase “inexpensive website promotion with links,” then first simply enter it in the search and evaluate the TOP 5 sites in the search results using the relevance assessment service.

If the service shows that the TOP 5 sites for the query "inexpensive website promotion with links" have a relevance of 18% to 30%, then you should aim for the same percentages. Better still, create a unique text with keywords and a relevance of about 35-50%. By slightly outdoing your competitors at this stage, you lay a good foundation for further advancement.

IMPORTANT: using the semantic core in practice implies that one phrase corresponds to one unique resource page. The maximum here is 2 queries per article.

The more fully the semantic core is revealed, the more informative your project will be. But if you are not ready for long work and thousands of new articles, there is no need to take on broad thematic niches. Even a narrow specialized area, developed 100%, will bring more traffic than an unfinished large website.

For example, you could base the site not on the high-frequency key "site promotion" (where the competition is enormous) but on a phrase with a lower frequency and a narrower specialization, such as "article site promotion" or "link promotion", and then reveal that topic to the maximum across all the articles on the platform. The effect will be higher.

Useful information for the future

Further use of your semantic core in practice will come down to the following:

  • adjusting and updating the list;
  • writing optimized texts with high relevance and uniqueness;
  • publishing articles on the website (1 query - 1 article);
  • increasing the usefulness of the material (editing the finished texts);
  • improving the quality of the articles and the site as a whole, and monitoring competitors;
  • marking in the core list the queries that have already been used;
  • complementing the optimization with other internal and external factors (links, usability, design, usefulness, videos, online help tools).

Note: the above is a very simplified version of events. In fact, based on the core, you can create sublevels, deeply nested structures, and branches into forums, blogs, and chats. But the principle is always the same.

BONUS: a useful tool for collecting the core in the Mozilla Firefox browser -

The semantic core is a scary name that SEO specialists came up with for a rather simple thing: we just need to select the key queries for which we will promote our site.

In this article I will show you how to compose a semantic core correctly, so that your site quickly reaches the TOP instead of stagnating for months. There are a few "secrets" here too.

Before we move on to compiling the SC, let's figure out what it is and what we should end up with.

What is the semantic core in simple words

Oddly enough, the semantic core is an ordinary Excel file listing the key queries for which you (or your copywriter) will write articles for the site.

For example, this is what my semantic core looks like:

I have marked in green the key queries for which I have already written articles, in yellow those I plan to write about in the near future; the colorless cells mean those queries' turn will come a little later.

For each key query I have determined the frequency and the competitiveness, and come up with a "catchy" title. You should end up with roughly the same kind of file. Right now my SC consists of 150 keywords, which means I am supplied with "material" for at least 5 months ahead (even if I write one article a day).
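If you want to reproduce such a file programmatically, here is a minimal sketch; the queries, numbers, statuses, and titles are all made up for illustration:

```python
# Sketch: the semantic core as a plain table (all values are invented examples).
import csv

core = [
    # query, exact frequency, competition, status, working title
    ("promoting a community on facebook", 2400, 4, "written",
     "Promoting a community on Facebook from scratch"),
    ("smm content plan", 880, 3, "planned",
     "An SMM content plan in one evening"),
]

with open("semantic_core.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["query", "frequency", "competition", "status", "title"])
    writer.writerows(core)
```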

Below we will discuss what to expect if you decide to order semantic core collection from specialists. For now, in short: they will give you the same kind of list, only for thousands of "keys". In an SC, however, it is not quantity that matters but quality, and that is what we will focus on.

Why do we need a semantic core at all?

But really, why all this torment? You can just write high-quality articles and attract an audience, right? Yes, you can write, but you won't attract anyone that way.

The main mistake of 90% of bloggers is that they simply write high-quality articles. I'm not kidding: they have genuinely interesting and useful material. It's just that search engines don't know about it. They are not psychics, just robots, so they do not rank your article at the TOP.

There is another subtle point with the title. For example, you have a very high-quality article on the topic "How to properly conduct business in a face book". There you describe everything about Facebook in great detail and professionally, including how to promote communities there. Your article is the highest-quality, most useful and interesting one on the Internet on this topic. No one even comes close. But it still won't help you.

Why high-quality articles fall from the TOP

Imagine that your site was visited not by a robot but by a live inspector (an assessor) from Yandex. He realized that you have the coolest article and manually put you in first place in the search results for the query "Promoting a community on Facebook".

Do you know what happens next? You will fly out of there very soon anyway, because no one will click on your article, even in first place. People enter the query "Promoting a community on Facebook", and your headline is "How to properly run a business in a face book". Original, fresh, funny, but... not what was asked. People want to see exactly what they searched for, not your creativity.

Accordingly, your article will vacate its place at the TOP. And the living assessor, an ardent admirer of your work, can beg his superiors as much as he likes to keep you at least in the TOP 10. It won't help. All the first places will be taken by empty articles, like sunflower-seed husks, that yesterday's schoolchildren copied from one another.

But these articles will have the correct "relevant" title: "Promoting a community on Facebook from scratch" (step by step, in 5 steps, from A to Z, free, etc.). Offensive? Of course. Well then, let's fight the injustice and create a competent semantic core, so that your articles take the well-deserved first places.

Another reason to start compiling an SC right now

There is one more thing that for some reason people don’t think much about. You need to write articles often - at least every week, but preferably 2-3 times a week - to get more traffic, faster.

Everyone knows this, but almost no one does it. And all because they have “creative stagnation”, “they just can’t force themselves”, “they’re just lazy”. But in fact, the whole problem lies in the absence of a specific semantic core.

I entered one of my basic keys - "smm" - into the search field, and Yandex immediately gave me a dozen hints about what else might interest people interested in "smm". All I have to do is copy these keys into a notebook. Then I will check each of them in the same way and collect hints for them as well.

After the first stage of collecting the SC, you should end up with a text document containing 10-30 broad basic keys, which we will work with further.

Step #2 — Parsing basic keys in SlovoEB

Of course, if you write an article for the query "webinar" or "smm", a miracle will not happen. You will never reach the TOP with such a broad query. We need to break the basic key down into many small queries on the topic, and we will do this with a special program.

I use KeyCollector, but it's paid. You can use a free analogue - the SlovoEB program. You can download it from the official website.

The most difficult thing about working with this program is setting it up correctly. I have shown how to set up and use SlovoEB properly elsewhere, but that article focuses on selecting keys for Yandex Direct.

Here, let's go step by step through using this program to create a semantic core for SEO.

First, we create a new project and name it by the broad key that you want to parse.

I usually give the project the same name as my base key to avoid confusion later. And let me warn you against one more mistake: don't try to parse all the base keys at once, or it will be very difficult for you to sift the "empty" key queries from the golden grains. Parse one key at a time.

After creating the project, we perform the basic operation: parsing the key through Yandex Wordstat. To do this, click the "Wordstat" button in the program interface, enter your base key, and click "Start collection".

For example, let's parse the base key for my blog “contextual advertising”.

After this, the process will start, and after some time the program will give us the result: up to 2000 key queries containing "contextual advertising".

Also, next to each query there will be a "dirty" frequency: how many times this key (plus its word forms and tails) was searched per month on Yandex. But I do not advise drawing any conclusions from these figures.

Step #3 - Collecting the exact frequency for the keys

The dirty frequency will not tell us anything. If you rely on it, don't be surprised when your key with 1000 requests does not bring a single click per month.

We need to identify the pure frequency. To do this, first select all the found keys with checkmarks, then click the "Yandex Direct" button and start the process again. Now Slovoeb will look up the exact monthly frequency for each key - the figure Wordstat gives when you wrap a query in quotes with exclamation marks, e.g. "!контекстная !реклама".

Now we have an objective picture of how many times each query was entered by Internet users over the past month. I now propose grouping all key queries by frequency to make them easier to work with.

To do this, click the "filter" icon in the exact frequency column and set the condition "less than or equal to 10".

Now the program will show only the queries whose frequency is less than or equal to 10. You can delete these queries or copy them into another group of key queries for later. Less than 10 a month is very little; writing articles for such queries is a waste of time.

Now we need to select the key queries that will bring us more or less decent traffic. For that, we need to find out one more parameter: the query's level of competition.

Step #4 — Checking the competitiveness of requests

All "keys" in this world are divided into 3 types: high-frequency (HF), mid-frequency (MF), and low-frequency (LF). They can also be highly competitive (HC), moderately competitive (MC), or low-competition (LC).

As a rule, HF queries are also HC: if a query is searched often, there are a lot of sites that want to rank for it. But this is not always the case; there are happy exceptions.

The art of compiling a semantic core lies precisely in finding queries with high frequency and a low level of competition. Determining the level of competition manually is very difficult.

You can look at indicators such as the number of home pages in the TOP 10, the length and quality of the texts, and the trust level and TIC of the sites in the TOP of the search results for the query. All of this will give you some idea of how tough the competition for rankings is for this particular query.

But I recommend using the Mutagen service. It takes into account all the parameters I listed above, plus a dozen more that neither you nor I have probably even heard of. After the analysis, the service gives an exact value for how competitive the query is.

Here I checked the query "setting up contextual advertising in google adwords". Mutagen showed that this key has a competitiveness of "more than 25", which is the maximum value it displays. And this query has only 11 views per month, so it definitely doesn't suit us.

We can copy all the keys we found in Slovoeb and run a mass check in Mutagen. After that, all we have to do is look through the list and take the queries with many requests and a low level of competition.

Mutagen is a paid service, but you can run 10 checks a day for free, and the cost of a check is very low. In all the time I have worked with it, I have not spent even 300 rubles.

By the way, about the level of competition: if you have a young site, it is better to choose queries with a competition level of 3-5. If you have been promoting for more than a year, you can take 10-15.

And regarding query frequency: we now need to take the final step, which will let you attract a good deal of traffic even from low-frequency queries.

Step #5 — Collecting “tails” for the selected keys

As has been proven and tested many times, your site will receive the bulk of its traffic not from the main keys but from so-called "tails": strange key queries that people type into the search bar with a frequency of 1-2 per month. Individually they are tiny, but there are a great many of them.

To see the “tail”, simply go to Yandex and enter the key query of your choice into the search bar. Here's roughly what you'll see.

Now you just need to write these additional words down in a separate document and use them in your article. You do not need to always place them next to the main key; otherwise search engines will see "over-optimization" and your articles will fall in the rankings.

Just use them in different places in your article, and then you will receive additional traffic from them as well. I would also recommend that you try to use as many word forms and synonyms as possible for your main key query.

For example, we have the query "Setting up contextual advertising". Here is how to reformulate it (see the sketch after the list):

  • Setting up = set up, make, create, run, launch, enable, place...
  • Contextual advertising = context, direct, teaser, YAN, AdWords, KMS...
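A quick way to enumerate these variants is to combine the two synonym sets; a minimal sketch (the word lists are shortened):

```python
# Sketch: enumerate key variants from two synonym sets (lists shortened).
from itertools import product

actions = ["set up", "create", "run", "launch"]
channels = ["contextual advertising", "context", "direct", "adwords"]

variants = [f"{action} {channel}" for action, channel in product(actions, channels)]
print(len(variants), "variants, e.g.:", variants[:3])
```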

You never know exactly how people will search for information. Add all these additional words to your semantic core and use them when writing texts.

In the end, we collect a list of 100-150 key queries. If you are creating a semantic core for the first time, this may take you several weeks.

Or maybe you'd rather not strain your eyes over it? Perhaps you can delegate compiling the SC to specialists who will do it better and faster? Yes, such specialists exist, but you do not always need their services.

Is it worth ordering an SC from specialists?

By and large, semantic core compilers will only do steps 1-3 of our scheme for you. Sometimes, for a large additional fee, they will also do steps 4-5 (collecting tails and checking the competitiveness of queries).

After that, they will give you several thousand key queries that you will need to work with further.

And the question here is whether you are going to write the articles yourself or hire copywriters. If you want to focus on quality rather than quantity, you need to write them yourself. But then a bare list of keys will not be enough: you will need to choose topics you understand well enough to write a quality article.

And here the question arises: why do we need SC specialists at all? Agree, parsing a base key and collecting the exact frequencies (steps #1-3) is not difficult; it will take you literally half an hour.

The most difficult thing is to pick HF queries with low competition. And then, as it turns out, you need those HF, low-competition queries on which you can write a good article. This is exactly what will take 99% of your time working on the semantic core, and no specialist will do it for you. So is it worth spending money on such services?

When are the services of SC specialists useful?

It's another matter if you plan from the start to bring in copywriters. Then you don't have to understand the subject of the query, and your copywriters won't understand it either: they will simply take several articles on the topic and compile "their own" text from them.

Such articles will be empty, miserable, almost useless, but there will be many of them. On your own, you can write at most 2-3 quality articles a week; an army of copywriters will supply 2-3 shoddy texts a day. They will, however, be optimized for the queries, which means they will attract some traffic.

In that case, yes, go ahead and hire SC specialists, and let them draw up the technical specifications for the copywriters while they're at it. But as you understand, this will also cost money.

Resume

Let's go over the main ideas in the article again to reinforce the information.

  • The semantic core is simply a list of key queries for which you will write articles on the site for promotion.
  • Texts must be optimized for precise key queries; otherwise even your highest-quality articles will never reach the TOP.
  • The SC works like a content plan for social networks: it keeps you out of a "creative crisis", so you always know exactly what you will write about tomorrow, the day after tomorrow, and in a month.
  • The free SlovoEB program is convenient for compiling a semantic core, and it may well be all you need.
  • Here are the five steps of compiling an SC: 1 - selecting basic keys; 2 - parsing the basic keys; 3 - collecting the exact frequency of the queries; 4 - checking the competitiveness of the keys; 5 - collecting the "tails".
  • If you want to write the articles yourself, it is better to build the semantic core yourself, for yourself. SC specialists will not be able to help you here.
  • If you want to go for quantity and use copywriters to write the articles, then delegating the compilation of the semantic core is quite possible. As long as there is enough money for everything.

I hope this guide was useful to you. Save it to your favorites so you don't lose it, and share it with your friends. And don't forget to download my book: there I show the fastest way from zero to the first million on the Internet (distilled from 10 years of personal experience =)

See you soon!

Yours Dmitry Novoselov

The semantic core is a set of search phrases and words used to promote a site. These search words and phrases help robots determine the topic of a page or of an entire resource, that is, find out what the company does.

In linguistics, semantics is the branch of language science that studies the meaning of lexical units. Applied to search engine optimization, this means the semantic core is the semantic content of a resource: it helps to decide what information to convey to users and in what manner. Semantics is therefore the foundation, the basis of all SEO.

Why do you need a semantic core of a website and how to use it?

  • The correct semantic core is necessary to accurately calculate the cost of promotion.
  • Semantics is a vector for building internal SEO optimization: the most relevant queries are selected for each service or product so that users and search robots can find them better.
  • Based on it, the site structure and texts for thematic pages are created.
  • Keys from semantics are used to write snippets (short descriptions of the page).

Here is an example of a semantic core our company compiled for a construction company's website:

The optimizer collects the semantics, parses it into logical blocks, finds out the number of impressions, and, based on the cost of the queries in the Yandex and Google top, calculates the total cost of promotion.

Of course, when selecting a semantic core, the specifics of the company's work are taken into account: for example, if the company did not design and build houses from laminated veneer lumber, we would delete the corresponding queries and not use them in the future. That is why an obligatory stage of working with semantics is coordinating it with the customer: no one knows the specifics of the company's work better than they do.

Types of Keywords

There are several parameters by which key queries are classified.

  1. By frequency:
    • high-frequency - words and phrases with a frequency of 1000 or more impressions per month;
    • mid-frequency - up to 1000 impressions per month;
    • low-frequency - up to 100 impressions.
  2. Collecting frequencies for keywords helps you find out what users request most often. But a high-frequency query is not necessarily a highly competitive one, and composing semantics from high-frequency, low-competition queries is one of the main goals in working with the semantic core.

  3. By type:
    • geo-dependent and non-geo-dependent - queries tied to a region and queries not tied to one;
    • informational - the user wants some information from them. Keys of this type are usually used in articles, for example reviews or useful tips;
    • branded - contain the name of the promoted brand;
    • transactional - implying an action from the user (buy, download, order), and so on.
  4. Other - queries that are difficult to assign to any one type: for example, the key "profiled beam". Typing such a query into a search engine, the user can mean anything: purchasing beams, their properties, comparisons with other materials, etc.

    From our company's experience, we can say that it is very difficult to promote a website on such queries: as a rule, they are high-frequency and highly competitive, which makes them not only difficult to optimize for but also expensive for the client.
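A rough pre-sort of queries by type can be scripted by looking for marker words; in the sketch below the marker lists are illustrative, not exhaustive:

```python
# Sketch: rough intent pre-sorting by marker words (markers are illustrative).

TRANSACTIONAL = ("buy", "order", "price", "download")
INFORMATIONAL = ("how", "why", "what", "review")

def query_type(query: str) -> str:
    q = query.lower()
    if any(w in q for w in TRANSACTIONAL):
        return "transactional"
    if any(w in q for w in INFORMATIONAL):
        return "informational"
    return "other"  # e.g. "profiled beam": the intent is ambiguous

for q in ["buy profiled beam", "how to choose a beam", "profiled beam"]:
    print(q, "->", query_type(q))
```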

How to collect a semantic core for a website?

  • By analyzing competitor sites (in SEMrush or Serpstat you can see competitors' semantic cores):

The process of compiling a semantic core

The collected queries are not yet a semantic core: we still need to separate the wheat from the chaff so that all the queries are relevant to the client's services.

To create a semantic core, the queries need to be clustered (divided into blocks according to the logic of service provision). This can be done using programs (for example, KeyAssort or TopSite), especially if the semantics are voluminous, or manually, by evaluating and iterating over the entire list and removing unsuitable queries.
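If you want to feel out the scripted route before buying anything, here is a naive sketch that groups queries by their most frequent shared word. Real clusterers such as KeyAssort compare actual search results, which this does not attempt:

```python
# Sketch: naive clustering by the most frequent shared word. Real tools
# compare search results; this only illustrates the grouping idea.
from collections import Counter, defaultdict

STOP = {"buy", "order", "price", "cheap", "for", "in", "a", "the"}

def cluster(queries):
    counts = Counter(w for q in queries for w in q.lower().split() if w not in STOP)
    groups = defaultdict(list)
    for q in queries:
        words = [w for w in q.lower().split() if w not in STOP]
        # label the query with its most frequent meaningful word
        label = max(words, key=counts.get) if words else "misc"
        groups[label].append(q)
    return dict(groups)

print(cluster(["buy men's boots", "men's boots price",
               "buy dresses", "red dresses for summer"]))
```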

Then we send the core to the client to check whether there are any errors.

A ready-made semantic core is a yellow-brick road to a content plan: blog articles, texts for product cards, company news, and so on. It is a table of audience needs that you can satisfy using your website.

  • Distribute the keys across pages.
  • Use keywords in the <title>, <description>, and <h> meta tags (especially in the first-level H1 heading).
  • Insert keys into the texts for pages. This is one of the white-hat optimization methods, but it is important not to overdo it: overspam can result in search engine filters.
  • Save the remaining search queries, and those that do not fit into any section, under the title "What else to write about". You can use them for informational articles in the future.
  • And remember: you need to focus on the requests and interests of users, so trying to cram all the keys into one text is pointless.

Collecting a semantic core for a website: main mistakes

  • Refusing highly competitive keys. Yes, you may never reach the top for the query "buy profiled timber" (and that will not stop you from successfully selling your services), but you still need to include it in your texts.
  • Refusing low frequencies. This is wrong for the same reason as rejecting highly competitive queries.
  • Creating pages for queries and for the sake of queries. "Buy profiled timber" and "order profiled timber" are essentially the same thing; there is no point in splitting them across separate pages.
  • Absolute and unconditional trust in the software. You cannot do without SEO programs, but manual analysis and data verification are necessary: no program can yet assess the industry and the level of competition and distribute keys without errors.
  • Keys are our everything. No: our everything is a convenient, understandable website and useful content. Any text needs keys, but if the text is bad, the keys will not save you.