Create your own sitemap for dynamic content with python

Sitemaps for Dynamic Content

As you may know, sitemaps are an important part of any website or application. A sitemap is a list of all the pages that make up a website; it helps search engines crawl them by indicating the site's internal structure.
This blog focuses on showing you how to create a sitemap that includes dynamic pages. First of all, what is dynamic content? It is content that constantly adapts according to how the user interacts with the page, using parameters that determine what is shown.

In essence, you have a page that can load different kinds of information; for example, a user page that receives parameters like user_id or slug_user.

Although this information is continually changing, you may want to add these pages to your sitemap. It's important to note that this won't directly affect your rankings, but it definitely helps search engines crawl those pages and makes them easier to index. You can build a sitemap for dynamic content with Python, using a simple script that queries your database and creates an XML file. We'll show you how to implement it in more detail in this blog.

How to create a sitemap with Python

1. We'll use Python 3.x, MySQL, Linux, and a few Python modules (listed below). An XSL stylesheet for your XML file is optional.

2. In our Python script we will include the modules we need, so the easiest way to install them is via pip. On Debian/Ubuntu, for instance, you can install pip with:

apt-get install python3-pip

How do I use the pip command?
To install a new Python package, type:

pip install <packageName> 

To uninstall a package installed by pip, type:

pip uninstall <packageName>

Those are the basics of pip.

3. Then, once we have pip installed, we are going to need the following packages. Note that xml, datetime, and StringIO (the io module in Python 3) ship with the standard library, so only MySQLdb and lxml need pip; on Python 3 the MySQLdb module is provided by the mysqlclient package:

  1. MySQLdb (pip install mysqlclient).
  2. xml (standard library).
  3. lxml (pip install lxml).
  4. StringIO (standard library; io in Python 3).
  5. datetime (standard library).

3.1 And we are going to import these modules in our Python script.


* Note: sys does not need to be installed via pip; the Python standard library already includes this module.

import MySQLdb
import xml.etree.ElementTree as ET  # cElementTree was removed in Python 3.9
from lxml import etree
import datetime
import sys
from xml.dom import minidom
from xml.dom.minidom import parse
from io import StringIO  # the StringIO module moved to io in Python 3

4. Next, we are going to write a function that gets the products from our database (DB):

def get_products():
    # Replace these connection params with your own credentials
    db = MySQLdb.connect(host='yourhost', port=3306,
                         user='youruser', passwd='somRanD0mPass',
                         db='yourDB')
    cursor = db.cursor()
    sql = """
    select product_id from products where status = 1;
    """
    products = []
    try:
        cursor.execute(sql)
        results = cursor.fetchall()
        for result in results:
            # The base URL was elided in the original post; use your own
            products.append('https://yoursite.com/product/' + str(result[0]))
    except (RuntimeError, TypeError, NameError) as err:
        print(err)
    db.close()
    return products

So, here is the explanation of this piece of code:
Our db variable receives the MySQLdb.connect object; inside that call we set the parameters for your database authentication: host, port, user, password, and database name. Next, we declare another variable that receives the db.cursor() object, which helps us manage the current DB connection. Then we write our SQL query; in this case, the products page just receives the product_id stored in our database.

*Important: the query must be wrapped in a try/except clause.

5. Then we use the cursor object's execute method and pass it the SQL query string; on the next line the results variable gets all the products returned by the query. To store this information, we need an array (a Python list).

5.1 Next, we use a for loop to go through every product in the results object; the products.append call adds each result to the products list. In this case, we build the URL by concatenating the product id; result[0] is the product_id column of each row.

5.2 Now, the except clause is the catch side of the try block; in Python the pattern is try/except. The tuple it receives, (RuntimeError, TypeError, NameError), is the set of exception types to catch; printing the caught exception is enough for debugging purposes.

5.3 Finally, we close the DB connection and return the products list, which contains the product URLs.
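The fetch-and-append pattern above can be sketched without a live MySQL server; in this minimal sketch an in-memory list of rows stands in for cursor.fetchall(), and the base URL is a made-up placeholder:

```python
# Sketch of the fetch/append pattern from get_products(), using an
# in-memory list of rows instead of a live MySQLdb cursor.
def build_product_urls(rows, base_url='https://yoursite.com/product/'):
    products = []
    try:
        for row in rows:
            # row[0] plays the role of result[0], the product_id column
            products.append(base_url + str(row[0]))
    except (RuntimeError, TypeError, NameError) as err:
        print(err)
    return products

# Rows shaped as cursor.fetchall() would return them: one tuple per record
urls = build_product_urls([(101,), (102,), (205,)])
print(urls)
# ['https://yoursite.com/product/101', 'https://yoursite.com/product/102', 'https://yoursite.com/product/205']
```

Once the real cursor is in place, the loop body stays identical; only the source of the rows changes.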

6. The next function is almost the same; it only changes the URL format, building category URLs from the slug_url field.

def get_categories():
    # Same connection params as in get_products()
    db = MySQLdb.connect(host='yourhost', port=3306,
                         user='youruser', passwd='somRanD0mPass',
                         db='yourDB')
    cursor = db.cursor()
    sql = """
    select slug_url from categories;
    """
    categories = []
    try:
        cursor.execute(sql)
        results = cursor.fetchall()
        for result in results:
            # Base URL elided in the original post; use your own
            categories.append('https://yoursite.com/category/' + str(result[0]))
    except (RuntimeError, TypeError, NameError) as err:
        print(err)
    db.close()
    return categories

The explanation is the same as above; only the URL pattern changes.

7. Also, we can have extra or static URLs, like the about page, contact us, etc. We're going to handle these by writing the following code:


*This is useful for content that doesn't come from the DB.

def get_extras():
    extras = []
    # The original URLs were elided; add your static pages here, e.g.:
    extras.append('https://yoursite.com/about')
    extras.append('https://yoursite.com/contact-us')
    return extras

8. OK, we're at the tricky part. Here we're going to create the function that parses all the data into the XML format a sitemap requires.

def create_sitemap(categories, products, extras):
    root = ET.Element('urlset',
                      xmlns='http://www.sitemaps.org/schemas/sitemap/0.9')
    try:
        dt = datetime.date.today().strftime("%Y-%m-%d")

        doc = ET.SubElement(root, "url")
        # The site's main URL was elided in the original post; use your own
        ET.SubElement(doc, "loc").text = "https://yoursite.com/"
        ET.SubElement(doc, "lastmod").text = dt
        ET.SubElement(doc, "changefreq").text = "weekly"
        ET.SubElement(doc, "priority").text = "1.0"

        for product in products:
            doc = ET.SubElement(root, "url")
            ET.SubElement(doc, "loc").text = product
            ET.SubElement(doc, "lastmod").text = dt
            ET.SubElement(doc, "changefreq").text = "weekly"
            ET.SubElement(doc, "priority").text = "0.8"

        for category in categories:
            doc = ET.SubElement(root, "url")
            ET.SubElement(doc, "loc").text = category
            ET.SubElement(doc, "lastmod").text = dt
            ET.SubElement(doc, "changefreq").text = "weekly"
            ET.SubElement(doc, "priority").text = "0.6"

        for extra in extras:
            doc = ET.SubElement(root, "url")
            ET.SubElement(doc, "loc").text = extra
            ET.SubElement(doc, "lastmod").text = dt
            ET.SubElement(doc, "changefreq").text = "weekly"
            ET.SubElement(doc, "priority").text = "0.5"

        tree = ET.ElementTree(root)
        tree.write('sitemap.xml', encoding='utf-8', xml_declaration=True)
        return True
    except (RuntimeError, TypeError, NameError) as err:
        print(err)
        return False

8.1 In this code fragment we create the XML file and assign the correct tags for our sitemap. In the first lines, ET.Element('urlset') creates the root urlset tag, carrying the sitemap namespace, which wraps every url entry in the file.

8.2 The dt variable gets the current date in year-month-day format, and it'll be added to each XML block.
The doc = ET.SubElement(root, "url") line opens a new XML block that will contain the following info:

ET.SubElement(doc, "loc").text = "https://yoursite.com/"  # your main URL
ET.SubElement(doc, "lastmod").text = dt
ET.SubElement(doc, "changefreq").text = "weekly"
ET.SubElement(doc, "priority").text = "1.0"

This code is for the main URL: it contains the URL as loc (location), the date as lastmod (last modification), and weekly as changefreq (change frequency). The priority level, in this case 1.0, will depend on your own criteria.
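A single url block can be built and inspected in isolation; a minimal sketch, assuming a placeholder main URL:

```python
# Build one <url> entry the same way create_sitemap() does,
# then serialize it to check the tags it produces.
import xml.etree.ElementTree as ET
import datetime

doc = ET.Element('url')
ET.SubElement(doc, 'loc').text = 'https://yoursite.com/'  # placeholder URL
ET.SubElement(doc, 'lastmod').text = datetime.date.today().strftime('%Y-%m-%d')
ET.SubElement(doc, 'changefreq').text = 'weekly'
ET.SubElement(doc, 'priority').text = '1.0'

xml_text = ET.tostring(doc, encoding='unicode')
print(xml_text)
```

The serialized string contains the four child tags in order, ready to be appended under the urlset root.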

8.3 The next loops create the same structure for the arrays holding the info we got from our DB. The output is a file named sitemap.xml with the following basic structure (the URLs shown here are placeholders):

<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="sitemap.xsl"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://yoursite.com/</loc>
    <lastmod>2016-11-03</lastmod>
    <changefreq>weekly</changefreq>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>https://yoursite.com/product/101</loc>
    <lastmod>2016-11-03</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
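If you later read such a file back (for a quick sanity check), note that every tag lives in the sitemap namespace, so lookups must be qualified; a small sketch with ElementTree:

```python
# Parse a small sitemap string and collect its <loc> values.
# Tag names must carry the sitemap namespace when searching.
import xml.etree.ElementTree as ET

sample = (
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">'
    '<url><loc>https://yoursite.com/</loc><priority>1.0</priority></url>'
    '<url><loc>https://yoursite.com/product/101</loc><priority>0.8</priority></url>'
    '</urlset>'
)
ns = {'sm': 'http://www.sitemaps.org/schemas/sitemap/0.9'}
root = ET.fromstring(sample)
locs = [url.find('sm:loc', ns).text for url in root.findall('sm:url', ns)]
print(locs)
# ['https://yoursite.com/', 'https://yoursite.com/product/101']
```

Without the namespace prefix, findall('url') would return nothing, which is a common surprise when testing generated sitemaps.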

9. Now we are going to create the primary entry point that calls the methods we've already seen.

if __name__ == '__main__':
    categories = get_categories()
    products = get_products()
    extras = get_extras()
    created = create_sitemap(categories, products, extras)
    if created:
        print("created")
        add_stylesheet()
        prettyPrintXml("./sitemap.xml")
    else:
        print("failed")

OK, so first you'll see the variables categories, products, and extras; they receive the arrays from their respective functions. Next, we invoke the create_sitemap function and pass the variables to be parsed and added to our XML file, named sitemap.xml. At this point we have a condition, depending on whether the sitemap was created or not. We also have two more functions, add_stylesheet and prettyPrintXml. These are optional: if you already have a stylesheet for your XML file, I recommend adding it, and also pretty-printing the XML file. Here is the code for both functions:
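To dry-run the whole pipeline without a database, you can feed create-sitemap-style code with plain lists; this sketch returns the XML as a string instead of writing a file, and every URL in it is a placeholder:

```python
# End-to-end dry run: same structure as create_sitemap(), but the
# result is returned as a string and the getters are stubbed lists.
import xml.etree.ElementTree as ET
import datetime

def create_sitemap_string(categories, products, extras):
    root = ET.Element('urlset',
                      xmlns='http://www.sitemaps.org/schemas/sitemap/0.9')
    dt = datetime.date.today().strftime('%Y-%m-%d')
    # Each group of URLs gets its own priority, as in the original
    for urls, priority in ((products, '0.8'),
                           (categories, '0.6'),
                           (extras, '0.5')):
        for url in urls:
            doc = ET.SubElement(root, 'url')
            ET.SubElement(doc, 'loc').text = url
            ET.SubElement(doc, 'lastmod').text = dt
            ET.SubElement(doc, 'changefreq').text = 'weekly'
            ET.SubElement(doc, 'priority').text = priority
    return ET.tostring(root, encoding='unicode')

# Stub lists standing in for the DB-backed getters
xml_text = create_sitemap_string(
    categories=['https://yoursite.com/category/shoes'],
    products=['https://yoursite.com/product/101'],
    extras=['https://yoursite.com/about'])
print(xml_text.count('<url>'))  # 3
```

Swapping the stub lists for get_categories(), get_products(), and get_extras() gives you the production version unchanged.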

def prettyPrintXml(xmlFilePathToPrettyPrint):
    assert xmlFilePathToPrettyPrint is not None
    parser = etree.XMLParser(resolve_entities=False, strip_cdata=False)
    document = etree.parse(xmlFilePathToPrettyPrint, parser)
    document.write(xmlFilePathToPrettyPrint, pretty_print=True, encoding='utf-8')

This function will help us to format the XML file in a readable way.
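You can see the effect on a compact in-memory document without touching sitemap.xml; a sketch, assuming lxml is installed:

```python
# Pretty-print a compact XML string with lxml, the same mechanism
# prettyPrintXml() applies to the sitemap file.
from io import BytesIO
from lxml import etree

compact = b'<urlset><url><loc>https://yoursite.com/</loc></url></urlset>'
tree = etree.parse(BytesIO(compact))
pretty = etree.tostring(tree, pretty_print=True, encoding='unicode')
print(pretty)
```

The output places each nested tag on its own indented line instead of the single-line form ElementTree writes.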

def add_stylesheet():
    doc = parse('sitemap.xml')
    pi = doc.createProcessingInstruction(
        'xml-stylesheet', 'type="text/xsl" href="sitemap.xsl"')
    doc.insertBefore(pi, doc.firstChild)
    with open('sitemap.xml', 'w') as f:
        doc.writexml(f)

If you have an XSL file to style the XML, just replace the name in the href attribute; as easy as that.
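The same processing-instruction trick can be tried on an in-memory document before pointing it at sitemap.xml:

```python
# Insert an xml-stylesheet processing instruction before the root
# element of a minidom document, as add_stylesheet() does for the file.
from xml.dom.minidom import parseString

doc = parseString('<urlset><url/></urlset>')
pi = doc.createProcessingInstruction(
    'xml-stylesheet', 'type="text/xsl" href="sitemap.xsl"')
doc.insertBefore(pi, doc.firstChild)
result = doc.toxml()
print(result)
```

The instruction ends up between the XML declaration and the urlset root, which is where browsers expect it when applying the stylesheet.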

Why should you have a sitemap for your dynamic content?

As we saw in this blog, we have learned how to create a sitemap file for dynamic content stored in our database. These files are very helpful for getting our entire site indexed by Google or any other search engine. Another tip: set the meta tags for the description, title, and keywords of each page correctly, so visitors can see a short summary of its main content in the search results.

Here is an example of how the title and description are indexed in Google.


Now you know how to do it; I hope you apply it right away, since it's an essential part of your SEO practice. In a few days I will be posting more about better SEO management in our systems. Remember that better SEO means more visitors, and that ClickIT is always your best option for IT services. Contact us!
