import urllib2
from bs4 import BeautifulSoup  # or "from BeautifulSoup import BeautifulSoup" for BS3

# Fetch and parse the page; url is assumed to be defined earlier.
f = urllib2.urlopen(url)
req = f.read()
soup = BeautifulSoup(req)

# The id sits in the element named "readonlycounter2", in the field after the first comma.
content = soup.findAll(attrs={"name": "readonlycounter2"})
subId = content[0].string.split(',')[1]
# The title is the first <span> inside the page's <h1>.
subName = soup.html.body.h1.span.string

# The type and legend cells share the "subdes_td" class.
content = soup.findAll(attrs={"class": "subdes_td"})
subType = content[0].string
subLeg = content[1].string

# The time and attached-file cells span three columns.
content = soup.findAll(attrs={"colspan": "3"})
subTime = content[2].string
subFile = content[7].div.string
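
The snippet above indexes the findAll results and walks the tag tree directly, so a failed request or any change in the page layout raises an IndexError or AttributeError. Below is a minimal defensive sketch of the same scrape; the fetch_submission name and the return-None-on-missing convention are my own choices, not part of the original answer.

import urllib2
from bs4 import BeautifulSoup


def fetch_submission(url):
    """Scrape the same fields, returning None for anything missing."""
    try:
        html = urllib2.urlopen(url, timeout=10).read()
    except urllib2.URLError:
        return None
    soup = BeautifulSoup(html)

    def cell(results, index):
        # Guard the index and the .string access instead of letting them raise.
        if len(results) > index and results[index].string:
            return results[index].string.strip()
        return None

    counters = soup.findAll(attrs={"name": "readonlycounter2"})
    raw = cell(counters, 0)
    des = soup.findAll(attrs={"class": "subdes_td"})
    span3 = soup.findAll(attrs={"colspan": "3"})
    return {
        "id": raw.split(',')[1] if raw and ',' in raw else None,
        "name": soup.h1.span.string if soup.h1 and soup.h1.span else None,
        "type": cell(des, 0),
        "leg": cell(des, 1),
        "time": cell(span3, 2),
        "file": span3[7].div.string if len(span3) > 7 and span3[7].div else None,
    }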