Use item[{name!r}] = {value!r} to set field value
Package:
scrapy

Exception Class:
AttributeError
Raise code
def __getattr__(self, name):
    if name in self.fields:
        raise AttributeError(f"Use item[{name!r}] to get field value")
    raise AttributeError(name)

def __setattr__(self, name, value):
    if not name.startswith('_'):
        raise AttributeError(f"Use item[{name!r}] = {value!r} to set field value")
    super().__setattr__(name, value)

def __len__(self):
    return len(self._values)

def __iter__(self):
    return iter(self._values)
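For context, these methods belong to Scrapy's Item base class: reading a declared field as an attribute, or assigning any attribute whose name does not start with an underscore, raises AttributeError, while dict-style access goes through the item's internal field mapping. A minimal sketch of that behaviour, using a hypothetical item with a single title field:

import scrapy

class DemoItem(scrapy.Item):
    title = scrapy.Field()

item = DemoItem()

# Attribute-style assignment is blocked by __setattr__ ...
try:
    item.title = 'abc'
except AttributeError as e:
    print(e)  # Use item['title'] = 'abc' to set field value

# ... but dict-style access works as expected.
item['title'] = 'abc'
print(item['title'])  # abc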
Links to the raise (1)
https://github.com/scrapy/scrapy/blob/ee682af3b06d48815dbdaa27c1177b94aaf679e1/scrapy/item.py#L112

Ways to fix
Steps to reproduce:
$ mkdir test-scrapy
$ cd test-scrapy
$ pipenv shell
$ pipenv install scrapy
$ pipenv run scrapy startproject myscraper
$ cd myscraper/myscraper/spiders
$ pipenv run scrapy genspider test test.com
Now, edit the items.py file in the myscraper/myscraper/ directory. Your MyscraperItem class inside items.py should look like this:
import scrapy

class MyscraperItem(scrapy.Item):
    title = scrapy.Field()
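If you later need per-field metadata (for example a serializer applied by the feed exporters), Field() accepts arbitrary keyword arguments; a small sketch, with str used here purely as a placeholder serializer:

import scrapy

class MyscraperItem(scrapy.Item):
    # Field() is just a container of metadata; keys such as 'serializer'
    # are read by other components (e.g. exporters), not by the item itself.
    title = scrapy.Field(serializer=str)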
Now, edit the spiders/test.py to look something like this:
import scrapy

from myscraper.items import MyscraperItem


class TestSpider(scrapy.Spider):
    name = 'test'
    allowed_domains = ['test.com']
    start_urls = ['http://test.com/']

    def parse(self, response):
        item = MyscraperItem()
        item.title = 'abc'
        yield item
Run the spider using the command:
$ pipenv run scrapy crawl test
After running the command, the test spider fails with the following exception:
2021-06-23 02:24:23 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.test.com/> (referer: None)
Traceback (most recent call last):
.......
File "/home/user/test-scrapy/myscraper/myscraper/spiders/test.py", line 11, in parse
item.title = 'abc'
File "/home/user/test-scrapy/venv/lib/python3.9/site-packages/scrapy/item.py", line 112, in __setattr__
raise AttributeError(f"Use item[{name!r}] = {value!r} to set field value")
AttributeError: Use item['title'] = 'abc' to set field value
2021-06-23 02:11:57 [scrapy.core.engine] INFO: Closing spider (finished)
Fixed version of code:
To fix it, update your TestSpider class inside the spiders/test.py file to look something like this:
import scrapy

from myscraper.items import MyscraperItem


class TestSpider(scrapy.Spider):
    name = 'test'
    allowed_domains = ['test.com']
    start_urls = ['http://test.com/']

    def parse(self, response):
        item = MyscraperItem()
        item['title'] = 'abc'
        yield item
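Equivalently, because scrapy.Item behaves like a mapping, you can populate the field when constructing the item and read it back with either indexing or .get(). If your project mixes Item subclasses with plain dicts or dataclasses, the itemadapter package (a Scrapy dependency) offers one wrapper interface for all of them; a hedged sketch:

from itemadapter import ItemAdapter

from myscraper.items import MyscraperItem

# Dict-style construction works because scrapy.Item implements the mapping protocol.
item = MyscraperItem(title='abc')
print(item['title'])      # abc
print(item.get('title'))  # abc

# ItemAdapter wraps Items, dicts, dataclasses, attrs objects, etc.
adapter = ItemAdapter(item)
adapter['title'] = 'xyz'
print(adapter['title'])   # xyz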
Run the spider again:
$ pipenv run scrapy crawl test