BeautifulSoup Example

We have already used BeautifulSoup briefly in earlier posts. Next, we will demonstrate it on Tencent's job-posting page: https://hr.tencent.com/position.php?&start=10#a

Using the BeautifulSoup4 parser, we will extract the job title, job category, number of openings, work location, and publish time from the recruitment page, along with the link to each position's detail page, and save everything to a file.
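Before touching the live page, it helps to see how the CSS selectors behave on a miniature example. The sketch below parses a hand-written table row with the shape that the selectors in the full script expect; the row structure here is an assumption reconstructed from those selectors, not copied from the live page:

from bs4 import BeautifulSoup

# A hand-written row mimicking the structure the selectors expect
# (assumed for illustration, not copied from the live page).
sample_html = """
<table>
  <tr class="even">
    <td><a href="position_detail.php?id=1">Example Position</a></td>
    <td>Technology</td>
    <td>2</td>
    <td>Shenzhen</td>
    <td>2018-05-02</td>
  </tr>
</table>
"""

soup = BeautifulSoup(sample_html, 'lxml')
row = soup.select('tr[class="even"]')[0]

print(row.select('td a')[0].get_text())        # job title
print(row.select('td a')[0].attrs['href'])     # relative detail link
print(row.select('td')[1].get_text())          # job category

The full script below applies the same selectors to the live page and stores the results as JSON.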

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin  # used to resolve the relative detail links
import json  # results are stored in JSON format

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36'
}


def tencent():
    url = 'https://hr.tencent.com/position.php?&start=10#a'
    response = requests.get(url, headers=headers)
    html = response.text

    html = BeautifulSoup(html, 'lxml')

    # CSS selectors: the job rows alternate between the "even" and "odd" classes
    result = html.select('tr[class="even"]')
    result2 = html.select('tr[class="odd"]')
    result += result2

    items = []
    for site in result:
        item = {}

        name = site.select('td a')[0].get_text()
        detailLink = site.select('td a')[0].attrs['href']
        catalog = site.select('td')[1].get_text()
        recruitNumber = site.select('td')[2].get_text()
        workLocation = site.select('td')[3].get_text()
        publishTime = site.select('td')[4].get_text()

        item['name'] = name
        item['detailLink'] = urljoin(url, detailLink)  # the href is relative, so resolve it against the page URL
        item['catalog'] = catalog
        item['recruitNumber'] = recruitNumber
        item['workLocation'] = workLocation
        item['publishTime'] = publishTime

        items.append(item)

    # disable ASCII escaping so the Chinese text is stored as UTF-8
    line = json.dumps(items, ensure_ascii=False)

    with open('tencent.json', 'w', encoding='utf-8') as f:
        f.write(line)


if __name__ == "__main__":
    tencent()
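Note that the two select() calls for the even and odd rows could also be combined into a single group selector, html.select('tr.even, tr.odd'). After the script has run, the saved file can be loaded back to confirm the data landed correctly; the snippet below is a minimal check, assuming tencent.json was written to the current directory by the script above:

import json

# Load the positions written by tencent() and print a short summary.
with open('tencent.json', 'r', encoding='utf-8') as f:
    positions = json.load(f)

print(len(positions), 'positions scraped')
for position in positions[:5]:
    print(position['name'], '|', position['workLocation'], '|', position['publishTime'])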