"""
This module provides a regression test for results of running the readability
algorithm on a variety of different real-world examples. For each page in the
test suite, a benchmark was captured that represents the current readability
results. Note that these are not necessarily ideal results, just the ones used
as a benchmark.
This allows you to tweak and change the readability algorithm and see how it
changes existing results, hopefully for the better.
Running the test
----------------
To run the regression suite:
$ python regression_test.py
This will generate a regression_test_output/ directory. Open
regression_test_output/index.html in a web browser to examine the results.
For each test, you can examine the original version, the benchmark readability
version, the current readability version (using your working code), and a diff
version between the benchmark and current versions.
You can run a subset of tests by using the '--case' option:
$ python regression_test.py --case arstechnica-000 --case slate-000
This invocation will only run the arstechnica-000 and slate-000 test cases.
This is handy for speeding up your testing cycle if you are working on specific
improvements.
Generating a new test case
--------------------------
Each test case is defined by a specification YAML file (test_name.yaml) and a
directory holding resources used by the test (test_name/). These both live in
the regression_test_data/ directory.
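
A spec file contains the test URL, a description, and a few optional fields.
The keys below are the ones this module reads; the values are only
illustrative (gen_test.py, described next, normally writes this file for you):

    url: http://foo.com/bar
    test_description: foo article
    enabled: true
    notes: ''
    url_map:
      'http://foo.com/bar': bar.html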

By far the easiest way to create a new regression test case is with the
gen_test.py program. For example:

    $ python gen_test.py create "http://foo.com/bar" foo-000 "foo article"

This will generate a new test case named 'foo-000' for the given URL, with the
description "foo article". The benchmark for the test will be generated with
the current readability algorithm.

gen_test.py does what it can to store locally any resources used by the page.
For example, images used by the original page are downloaded so that the test
can run entirely locally and results can be viewed complete with images.
There are cases where this will fail, though. For example, if the benchmark
does not pull in multi-page articles and you later fix the algorithm to pull
them in, those extra pages and their dependencies will not be local.

Once generated, the test will automatically be included the next time you run
regression_test.py.

gen_test.py can also regenerate the benchmark result for an existing test:

    $ python gen_test.py genbench foo-000

If the new benchmark requires new external resources, like the multi-page
example mentioned above, use the --refetch option:

    $ python gen_test.py genbench --refetch foo-000

Workflow
--------

Here is an example workflow for working on the readability algorithm:

1. Find a page that the algorithm does not handle well, e.g. http://foo.com/bar

2. Generate a regression test case for it:

       $ python gen_test.py create "http://foo.com/bar" foo-000 "foo article"

3. Work on the algorithm, re-running the regression test to see your
   improvements:

       $ python regression_test.py --case foo-000

4. Periodically run the whole regression suite to make sure you are not
   breaking the algorithm for other inputs.

5. Regenerate the benchmarks as necessary with your improved algorithm.
"""
from lxml.html import builder as B
from regression_test_css import SUMMARY_CSS, READABILITY_CSS
import argparse
import logging
import lxml.html
import lxml.html.diff
import os
import os.path
import re
import readability
import shutil
import sys
import urllib
import urlparse
import readability.urlfetch as urlfetch
import yaml
YAML_EXTENSION = '.yaml'
READABLE_SUFFIX = '.rdbl'
RESULT_SUFFIX = '.result'
DIFF_SUFFIX = '.diff'
TEST_DATA_PATH = 'regression_test_data'
TEST_OUTPUT_PATH = 'regression_test_output'
TEST_SUMMARY_PATH = os.path.join(TEST_OUTPUT_PATH, 'index.html')
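
# On disk, each test case 'foo-000' is expected to look roughly like this
# (layout inferred from the constants above and from how paths are built
# below; gen_test.py is the authoritative source):
#
#   regression_test_data/foo-000.yaml        test specification
#   regression_test_data/foo-000/<page>       original HTML (path from url_map)
#   regression_test_data/foo-000/<page>.rdbl  benchmark readability output
#
# Results are written under regression_test_output/, with index.html as the
# summary page.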


class ReadabilityTest:

    def __init__(
        self,
        dir_path,
        enabled,
        name,
        url,
        desc,
        notes,
        url_map
    ):
        self.dir_path = dir_path
        self.enabled = enabled
        self.name = name
        self.url = url
        self.desc = desc
        self.notes = notes
        self.url_map = url_map


class ReadabilityTestData:

    def __init__(self, test, orig_html, rdbl_html):
        self.test = test
        self.orig_html = orig_html
        self.rdbl_html = rdbl_html


class ReadabilityTestResult:

    def __init__(self, test_data, result_html, diff_html):
        self.test_data = test_data
        self.result_html = result_html
        self.diff_html = diff_html


def read_yaml(path):
    with open(path, 'r') as f:
        return yaml.load(f)


def make_readability_test(dir_path, name, spec_dict):
    enabled = spec_dict.get('enabled', True)
    notes = spec_dict.get('notes', '')
    url_map = spec_dict.get('url_map', dict())
    return ReadabilityTest(
        dir_path,
        enabled,
        name,
        spec_dict['url'],
        spec_dict['test_description'],
        notes,
        url_map
    )


def load_test_data(test):
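    """Load the locally stored original page and benchmark readability output
    for a test. Returns None for disabled tests."""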
    def read_data(suffix):
        rel_path = test.url_map[test.url] + suffix
        path = os.path.join(TEST_DATA_PATH, test.name, rel_path)
        with open(path, 'r') as f:
            return f.read()
    if test.enabled:
        orig = read_data('')
        rdbl = read_data(READABLE_SUFFIX)
        return ReadabilityTestData(test, orig, rdbl)
    else:
        return None


def load_readability_tests(dir_path, files, cases):
    yaml_files = [f for f in files if f.endswith(YAML_EXTENSION)]
    yaml_paths = [os.path.join(dir_path, f) for f in yaml_files]
    names = [re.sub(r'\.yaml$', '', f) for f in yaml_files]
    spec_dicts = [read_yaml(p) for p in yaml_paths]
    return [
        make_readability_test(dir_path, name, spec_dict)
        for (name, spec_dict) in zip(names, spec_dicts)
        if cases is None or name in cases
    ]


def execute_test(test_data):
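    """Run the current readability code over the stored original HTML, using a
    MockUrlFetch so no network access is needed, and diff the result against
    the stored benchmark. Returns None for a disabled test."""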
    if test_data is None:
        return None
    else:
        base_path = os.path.join(TEST_DATA_PATH, test_data.test.name)
        fetcher = urlfetch.MockUrlFetch(base_path, test_data.test.url_map)
        doc = readability.Document(
            test_data.orig_html,
            url=test_data.test.url,
            urlfetch=fetcher
        )
        summary = doc.summary()
        diff = lxml.html.diff.htmldiff(test_data.rdbl_html, summary.html)
        return ReadabilityTestResult(test_data, summary.html, diff)


def element_string_lengths(elems):
    return [len(e.xpath('string()')) for e in elems]


class ResultSummary:
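    """Character counts (and block counts) of the insertions and deletions in
    a diff produced by lxml.html.diff.htmldiff."""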

    def __init__(self, result):
        # logging.debug('diff: %s' % result.diff_html)
        doc = lxml.html.document_fromstring(
            '<html><body>' + result.diff_html + '</body></html>')
        insertions = doc.xpath('//ins')
        insertion_lengths = element_string_lengths(insertions)
        self.insertions = sum(insertion_lengths)
        self.insertion_blocks = len(insertions)
        deletions = doc.xpath('//del')
        deletion_lengths = element_string_lengths(deletions)
        self.deletions = sum(deletion_lengths)
        self.deletion_blocks = len(deletions)
        # doc = lxml.html.fragment_fromstring('<div></div>')
        # self.insertions = 0
        # self.insertion_blocks = 0
        # self.deletions = 0
        # self.deletion_blocks = 0


def make_summary_row(test, result):
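    """Build one row of the summary table, with links to the original,
    benchmark, result, and diff pages for a test."""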
    def output(suffix):
        rel_path = test.url_map[test.url]
        return urllib.quote(os.path.join(test.name, rel_path) + suffix)
    if test.enabled:
        s = ResultSummary(result)
        return B.TR(
            B.TD(test.name),
            B.TD('%d (%d)' % (s.insertions, s.insertion_blocks)),
            B.TD('%d (%d)' % (s.deletions, s.deletion_blocks)),
            B.TD(
                B.A('original', href=output('')),
                ' ',
                B.A('benchmark', href=output(READABLE_SUFFIX)),
                ' ',
                B.A('result', href=output(RESULT_SUFFIX)),
                ' ',
                B.A('diff', href=output(DIFF_SUFFIX))
            ),
            B.TD(test.notes)
        )
    else:
        return B.TR(
            B.CLASS('skipped'),
            B.TD('%s (SKIPPED)' % test.name),
            B.TD('N/A'),
            B.TD('N/A'),
            B.TD('N/A'),
            B.TD(test.notes)
        )


def make_summary_doc(tests_w_results):
    tbody = B.TBODY(
        B.TR(
            B.TH('Test Name'),
            B.TH('Inserted chars (# of blocks)'),
            B.TH('Deleted chars (# of blocks)'),
            B.TH('Links'),
            B.TH('Notes')
        )
    )
    for (test, result) in tests_w_results:
        row = make_summary_row(test, result)
        tbody.append(row)
    return B.HTML(
        B.HEAD(
            B.TITLE('Readability Test Summary'),
            B.STYLE(SUMMARY_CSS, type='text/css')
        ),
        B.BODY(
            B.TABLE(
                tbody
            )
        )
    )


def write_summary(path, tests_w_results):
    doc = make_summary_doc(tests_w_results)
    with open(path, 'w') as f:
        f.write(lxml.html.tostring(doc))


def add_css(doc):
    style = B.STYLE(READABILITY_CSS, type='text/css')
    head = B.HEAD(style, content='text/html; charset=utf-8')
    doc.insert(0, head)


def convert_links(url_map, url, doc):
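    """Rewrite the links in doc so that they point at the locally stored
    copies of each resource, making the output viewable offline."""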
    url_path = url_map[url]
    url_dir = os.path.dirname(url_path)
    logging.debug('converting links: url_dir: %s' % url_dir)

    def link_repl_func(link):
        if link in url_map:
            link_path = url_map[link]
            logging.debug('converting links: link_path: %s' % link_path)
            new_link = os.path.relpath(link_path, url_dir)
            logging.debug('converting links: new_link: %s' % new_link)
            return urllib.quote(new_link)
        else:
            split_link = urlparse.urlsplit(link)
            if split_link.scheme == '':
                if split_link.path == '':
                    return link
                elif split_link.path[0] == '/':
                    root_path = urlparse.urlsplit(url).netloc
                    link_path = os.path.join(root_path, split_link.path[1:])
                    new_link = os.path.relpath(link_path, url_dir)
                    return urllib.quote(new_link)
                else:
                    new_link = os.path.join(url_dir, split_link.path)
                    return urllib.quote(new_link)
            else:
                return link
    doc.rewrite_links(link_repl_func)


def write_output_html(url_map, url, html, path, should_add_css):
    doc = lxml.html.document_fromstring(html)
    if should_add_css:
        add_css(doc)
    convert_links(url_map, url, doc)
    html = lxml.html.tostring(doc)
    with open(path, 'w') as f:
        f.write(html)


def write_result(output_dir_path, result):
    test_name = result.test_data.test.name
    # Copy the base_path to output_base_path so that the result has access to
    # any images it needs to display properly. This will also copy the
    # original page and benchmark readability result.
    base_path = os.path.join(TEST_DATA_PATH, test_name)
    output_base_path = os.path.join(TEST_OUTPUT_PATH, test_name)
    shutil.rmtree(output_base_path, ignore_errors=True)
    shutil.copytree(base_path, output_base_path)
    # Write pretty versions of the benchmark, result, and diffs into the
    # output. Note that this will overwrite the original and benchmark that we
    # copied over.
    specs = [
        (result.test_data.orig_html, '', False),
        (result.test_data.rdbl_html, READABLE_SUFFIX, True),
        (result.result_html, RESULT_SUFFIX, True),
        (result.diff_html, DIFF_SUFFIX, True)
    ]
    for (html, suffix, should_add_css) in specs:
        url = result.test_data.test.url
        url_map = result.test_data.test.url_map
        url_path = url_map[url]
        path = os.path.join(output_dir_path, test_name, url_path) + suffix
        write_output_html(url_map, url, html, path, should_add_css)


def print_test_info(test):
    name_string = '%s' % test.name
    if test.enabled:
        skipped = ''
    else:
        skipped = ' (SKIPPED)'
    print('%20s: %s%s' % (name_string, test.desc, skipped))


def run_readability_tests(cases):
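    """Run every enabled test (or only the named cases, if given), write the
    per-test output pages, and write the summary index."""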
    files = os.listdir(TEST_DATA_PATH)
    tests = load_readability_tests(TEST_DATA_PATH, files, cases)
    test_datas = [load_test_data(t) for t in tests]
    results = [execute_test(t) for t in test_datas]
    for (test, result) in zip(tests, results):
        print_test_info(test)
        if result:
            write_result(TEST_OUTPUT_PATH, result)
    write_summary(TEST_SUMMARY_PATH, zip(tests, results))


DESCRIPTION = 'Run the readability regression test suite.'


def main():
    parser = argparse.ArgumentParser(description=DESCRIPTION)
    parser.add_argument(
        '--debug',
        action='store_const',
        const=True,
        default=False,
        help='enable debug logging'
    )
    parser.add_argument(
        '--case',
        action='append',
        help='a test case to run'
    )
    args = parser.parse_args()
    level = logging.DEBUG if args.debug else logging.INFO
    logging.basicConfig(level=level)
    run_readability_tests(args.case)


if __name__ == '__main__':
    main()