nuxt-simple-robots
Simply manage the robots crawling your Nuxt 3 app.
Status: v2 Released 🎉 Please report any issues 🐛 Made possible by my Sponsor Program 💖 Follow me @harlan_zw 🐦 • Join Discord for help
ℹ️ Looking for a complete SEO solution? Check out Nuxt SEO Kit.
Features
- 🤖 Creates best practice robot data
- 🗿 Adds `X-Robots-Tag` header, robots meta tag and robots.txt
- 🔄 Configure using route rules and hooks
- 🔒 Automatically disables crawling for non-production environments
- Best practice default config
Zero Config Integrations
Sitemap entries will automatically be added to your robots.txt.
Install
```bash
# Using npm
npm install --save-dev nuxt-simple-robots
# Using yarn
yarn add --dev nuxt-simple-robots
```
Setup
nuxt.config.ts
```ts
export default defineNuxtConfig({
  modules: [
    'nuxt-simple-robots',
  ],
})
```
Set Site URL (required when prerendering)
If your prerendered robots.txt references sitemaps, you'll need to provide the URL of your site.
```ts
export default defineNuxtConfig({
  // Recommended
  runtimeConfig: {
    public: {
      siteUrl: process.env.NUXT_PUBLIC_SITE_URL || 'https://example.com',
    }
  },
  // OR
  robots: {
    siteUrl: 'https://example.com',
  },
})
```
Using route rules
Using route rules, you can configure how your routes are indexed by search engines.
You can provide the following rules:

- `index: false` - Disables the route from being indexed, using the `robotsDisabledValue` config (default `noindex, nofollow`)
- `robots: <string>` - Uses the provided string as the robots rule
```ts
export default defineNuxtConfig({
  routeRules: {
    // use the `index` shortcut for simple rules
    '/secret/**': { index: false },
    // add exceptions for individual routes
    '/secret/visible': { index: true },
    // use the `robots` rule if you need finer control
    '/custom-robots': { robots: 'index, follow' },
  }
})
```
The rules are applied using the following logic:
- `X-Robots-Tag` header - SSR only
- `<meta name="robots">` - When using the `defineRobotMeta` composable or the `RobotMeta` component
- `/robots.txt` disallow entry - When `disallowNonIndexableRoutes` is enabled
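To verify the header is being applied, you can fetch a server-rendered route and inspect the response. A minimal sketch, where the URL is a placeholder and the expected value assumes the route rules above with the default `robotsDisabledValue`:

```ts
// Hypothetical check: inspect the X-Robots-Tag header on an SSR response.
const res = await fetch('https://example.com/secret/page')
// Should print "noindex, nofollow" for a route with `index: false`.
console.log(res.headers.get('x-robots-tag'))
```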
Meta Tags
By default, only the `/robots.txt` file and the HTTP headers set by the server middleware are used to control indexing.

For SSG apps, or to make debugging easier, it's recommended to add a robots meta tag to your pages as well.
Within your app.vue or a layout:
<script lang="ts" setup>// Use Composition APIdefineRobotMeta()</script><template> <div> <!-- OR Component API --> <RobotMeta /> </div></template>
Module Config
siteUrl

- Type: `string`
- Default: `process.env.NUXT_PUBLIC_SITE_URL || nuxt.options.runtimeConfig.public?.siteUrl`
Used to ensure sitemaps are absolute URLs.
It's recommended that you use runtime config for this.
```ts
export default defineNuxtConfig({
  runtimeConfig: {
    public: {
      // can be set with environment variables
      siteUrl: process.env.NUXT_PUBLIC_SITE_URL || 'https://example.com',
    }
  },
})
```
indexable

- Type: `boolean`
- Default: `process.env.NUXT_INDEXABLE || nuxt.options.runtimeConfig.indexable || process.env.NODE_ENV === 'production'`
Whether the site is indexable by search engines.
It's recommended that you use runtime config for this.
```ts
export default defineNuxtConfig({
  runtimeConfig: {
    // can be set with environment variables
    indexable: process.env.NUXT_INDEXABLE || false,
  },
})
```
disallow

- Type: `string[]`
- Default: `[]`
- Required: `false`
Disallow paths from being crawled.
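For example, a minimal sketch of blocking crawlers from some sections of a site; the paths here are placeholders:

```ts
export default defineNuxtConfig({
  robots: {
    // each path is written to robots.txt as a disallow entry
    disallow: [
      '/admin',
      '/account',
    ],
  },
})
```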
sitemap

- Type: `string | string[] | false`
- Default: `false`
The sitemap URL(s) for the site. If you have multiple sitemaps, you can provide an array of URLs.
You must either define the runtime config `siteUrl` or provide the sitemap entries as absolute URLs.
```ts
export default defineNuxtConfig({
  robots: {
    sitemap: [
      '/sitemap-one.xml',
      '/sitemap-two.xml',
    ],
  },
})
```
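Alternatively, if you don't define `siteUrl`, provide the sitemap as an absolute URL; the domain here is a placeholder:

```ts
export default defineNuxtConfig({
  robots: {
    // an absolute URL works without any `siteUrl` config
    sitemap: 'https://example.com/sitemap.xml',
  },
})
```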
robotsEnabledValue

- Type: `string`
- Default: `'index, follow, max-image-preview:large, max-snippet:-1, max-video-preview:-1'`
- Required: `false`
The value to use when the site is indexable.
robotsDisabledValue

- Type: `string`
- Default: `'noindex, nofollow'`
- Required: `false`
The value to use when the site is not indexable.
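As a sketch, both values can be overridden together in the module config; the directives shown are just examples of valid robots values, not recommendations:

```ts
export default defineNuxtConfig({
  robots: {
    // used when the site is indexable
    robotsEnabledValue: 'index, follow',
    // used when the site is not indexable
    robotsDisabledValue: 'noindex, nofollow, noarchive',
  },
})
```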
disallowNonIndexableRoutes

- Type: `boolean`
- Default: `false`

Whether route rules that disable indexing should be added to the `/robots.txt` file as disallow entries.
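For example, a minimal sketch of enabling it so that non-indexable route rules also surface in `/robots.txt`:

```ts
export default defineNuxtConfig({
  robots: {
    // route rules with `index: false` will also be written
    // to robots.txt as disallow entries
    disallowNonIndexableRoutes: true,
  },
})
```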
Nuxt Hooks
robots:config
Type: `async (config: ModuleOptions) => void | Promise<void>`
This hook allows you to modify the robots config before it is used to generate the robots.txt and meta tags.
```ts
export default defineNuxtConfig({
  hooks: {
    'robots:config': (config) => {
      // modify the config
      config.sitemap = '/sitemap.xml'
    },
  },
})
```
Nitro Hooks
robots:robots-txt
Type: `async (ctx: { robotsTxt: string }) => void | Promise<void>`
This hook allows you to modify the robots.txt content before it is sent to the client.
```ts
import { defineNitroPlugin } from 'nitropack/runtime/plugin'

export default defineNitroPlugin((nitroApp) => {
  if (!process.dev) {
    nitroApp.hooks.hook('robots:robots-txt', async (ctx) => {
      // remove comments from robotsTxt in production
      ctx.robotsTxt = ctx.robotsTxt.replace(/^#.*$/gm, '').trim()
    })
  }
})
```
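In a Nuxt app, a Nitro plugin like this is typically placed in the `server/plugins/` directory (the file name is up to you), where it will be registered automatically.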
Sponsors
License
MIT License © 2022-PRESENT Harlan Wilton