As part of our release process I have to compare some JSON configuration data used by my application. As a first attempt, I just pretty printed the JSON and diffed the files (using kdiff3 or plain diff).
As the data grew, however, kdiff3 started mixing up different sections of the output, making additions look like huge modifications, showing odd deletions, and so on, which makes it very hard to work out what actually changed. I have tried other diff tools as well (meld, kompare, diff, and a few others), but they all have the same problem.
Despite my best efforts, I cannot seem to format the JSON in a way that the diff tools can make sense of.
Example data:
[
    {
        "name": "date",
        "type": "date",
        "nullable": true,
        "state": "enabled"
    },
    {
        "name": "owner",
        "type": "string",
        "nullable": false,
        "state": "enabled"
    }
    ...lots more...
]
The above probably would not cause problems (the trouble starts once there are hundreds of lines), but it gives the gist of what is being compared.
That is just a sample; the full objects have 4-5 attributes, and some of those attributes contain another 4-5 attributes of their own. The attribute names are fairly uniform, but their values vary a lot.
In general, all the diff tools seem to confuse the closing "}" of one object with the closing "}" of the following object, and I cannot seem to break them of that habit.
I have tried adding whitespace, changing the indentation, and adding "BEGIN" and "END" strings before and after the respective objects, but the tools still get confused.
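For reference, the pretty printing I mean is roughly the following (a minimal sketch; the file names are placeholders, and sort_keys is simply there to keep property order stable between dumps):

import json

# Placeholder file names; both sides get normalized the same way before diffing.
with open('config_old.json') as src:
    data = json.load(src)

with open('config_old.pretty.json', 'w') as dst:
    # indent puts one value per line, sort_keys keeps property order stable
    json.dump(data, dst, indent=2, sort_keys=True)
    dst.write('\n')

Even with the output normalized like that, the tools still pair up the wrong closing braces once the file grows.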
Best Answer
Patience Diff might work a lot better for you, if any of your tools has the option. I will try to find a tool that uses it (other than Git and Bazaar) and report back.
Edit: It seems the implementation in Bazaar is usable as a standalone tool with only minimal changes.
Edit 2: What the heck, why not just paste the source of the cool new diff script you made me hack up? Here it is; no copyright claims on my side, just rearranged code from Bram/Canonical.
#!/usr/bin/env python
# Copyright (C) 2005, 2006, 2007 Canonical Ltd
# Copyright (C) 2005 Bram Cohen, Copyright (C) 2005, 2006 Canonical Ltd
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
import os
import sys
import time
import difflib
from bisect import bisect
__all__ = ['PatienceSequenceMatcher', 'unified_diff', 'unified_diff_files']
py3k = False
try:
    xrange
except NameError:
    py3k = True
    xrange = range
# This is a version of unified_diff which only adds a factory parameter
# so that you can override the default SequenceMatcher
# this has been submitted as a patch to python
def unified_diff(a, b, fromfile='', tofile='', fromfiledate='',
                 tofiledate='', n=3, lineterm='\n',
                 sequencematcher=None):
    r"""
    Compare two sequences of lines; generate the delta as a unified diff.

    Unified diffs are a compact way of showing line changes and a few
    lines of context. The number of context lines is set by 'n' which
    defaults to three.

    By default, the diff control lines (those with ---, +++, or @@) are
    created with a trailing newline. This is helpful so that inputs
    created from file.readlines() result in diffs that are suitable for
    file.writelines() since both the inputs and outputs have trailing
    newlines.

    For inputs that do not have trailing newlines, set the lineterm
    argument to "" so that the output will be uniformly newline free.

    The unidiff format normally has a header for filenames and modification
    times. Any or all of these may be specified using strings for
    'fromfile', 'tofile', 'fromfiledate', and 'tofiledate'. The modification
    times are normally expressed in the format returned by time.ctime().

    Example:

    >>> for line in unified_diff('one two three four'.split(),
    ...             'zero one tree four'.split(), 'Original', 'Current',
    ...             'Sat Jan 26 23:30:50 1991', 'Fri Jun 06 10:20:52 2003',
    ...             lineterm=''):
    ...     print line
    --- Original Sat Jan 26 23:30:50 1991
    +++ Current Fri Jun 06 10:20:52 2003
    @@ -1,4 +1,4 @@
    +zero
     one
    -two
    -three
    +tree
     four
    """
    if sequencematcher is None:
        import difflib
        sequencematcher = difflib.SequenceMatcher

    if fromfiledate:
        fromfiledate = '\t' + str(fromfiledate)
    if tofiledate:
        tofiledate = '\t' + str(tofiledate)

    started = False
    for group in sequencematcher(None, a, b).get_grouped_opcodes(n):
        if not started:
            yield '--- %s%s%s' % (fromfile, fromfiledate, lineterm)
            yield '+++ %s%s%s' % (tofile, tofiledate, lineterm)
            started = True
        i1, i2, j1, j2 = group[0][1], group[-1][2], group[0][3], group[-1][4]
        yield "@@ -%d,%d +%d,%d @@%s" % (i1+1, i2-i1, j1+1, j2-j1, lineterm)
        for tag, i1, i2, j1, j2 in group:
            if tag == 'equal':
                for line in a[i1:i2]:
                    yield ' ' + line
                continue
            if tag == 'replace' or tag == 'delete':
                for line in a[i1:i2]:
                    yield '-' + line
            if tag == 'replace' or tag == 'insert':
                for line in b[j1:j2]:
                    yield '+' + line
def unified_diff_files(a, b, sequencematcher=None):
    """Generate the diff for two files.
    """
    mode = 'rb'
    if py3k: mode = 'r'
    # Should this actually be an error?
    if a == b:
        return []
    if a == '-':
        file_a = sys.stdin
        time_a = time.time()
    else:
        file_a = open(a, mode)
        time_a = os.stat(a).st_mtime

    if b == '-':
        file_b = sys.stdin
        time_b = time.time()
    else:
        file_b = open(b, mode)
        time_b = os.stat(b).st_mtime

    # TODO: Include fromfiledate and tofiledate
    return unified_diff(file_a.readlines(), file_b.readlines(),
                        fromfile=a, tofile=b,
                        sequencematcher=sequencematcher)
def unique_lcs_py(a, b):
    """Find the longest common subset for unique lines.

    :param a: An indexable object (such as string or list of strings)
    :param b: Another indexable object (such as string or list of strings)
    :return: A list of tuples, one for each line which is matched.
            [(line_in_a, line_in_b), ...]

    This only matches lines which are unique on both sides.
    This helps prevent common lines from over influencing match
    results.
    The longest common subset uses the Patience Sorting algorithm:
    http://en.wikipedia.org/wiki/Patience_sorting
    """
    # set index[line in a] = position of line in a unless
    # a is a duplicate, in which case it's set to None
    index = {}
    for i in xrange(len(a)):
        line = a[i]
        if line in index:
            index[line] = None
        else:
            index[line] = i
    # make btoa[i] = position of line i in a, unless
    # that line doesn't occur exactly once in both,
    # in which case it's set to None
    btoa = [None] * len(b)
    index2 = {}
    for pos, line in enumerate(b):
        next = index.get(line)
        if next is not None:
            if line in index2:
                # unset the previous mapping, which we now know to
                # be invalid because the line isn't unique
                btoa[index2[line]] = None
                del index[line]
            else:
                index2[line] = pos
                btoa[pos] = next
    # this is the Patience sorting algorithm
    # see http://en.wikipedia.org/wiki/Patience_sorting
    backpointers = [None] * len(b)
    stacks = []
    lasts = []
    k = 0
    for bpos, apos in enumerate(btoa):
        if apos is None:
            continue
        # as an optimization, check if the next line comes at the end,
        # because it usually does
        if stacks and stacks[-1] < apos:
            k = len(stacks)
        # as an optimization, check if the next line comes right after
        # the previous line, because usually it does
        elif stacks and stacks[k] < apos and (k == len(stacks) - 1 or
                                              stacks[k+1] > apos):
            k += 1
        else:
            k = bisect(stacks, apos)
        if k > 0:
            backpointers[bpos] = lasts[k-1]
        if k < len(stacks):
            stacks[k] = apos
            lasts[k] = bpos
        else:
            stacks.append(apos)
            lasts.append(bpos)
    if len(lasts) == 0:
        return []
    result = []
    k = lasts[-1]
    while k is not None:
        result.append((btoa[k], k))
        k = backpointers[k]
    result.reverse()
    return result
def recurse_matches_py(a, b, alo, blo, ahi, bhi, answer, maxrecursion):
    """Find all of the matching text in the lines of a and b.

    :param a: A sequence
    :param b: Another sequence
    :param alo: The start location of a to check, typically 0
    :param blo: The start location of b to check, typically 0
    :param ahi: The maximum length of a to check, typically len(a)
    :param bhi: The maximum length of b to check, typically len(b)
    :param answer: The return array. Will be filled with tuples
                   indicating [(line_in_a, line_in_b)]
    :param maxrecursion: The maximum depth to recurse.
                         Must be a positive integer.
    :return: None, the return value is in the parameter answer, which
             should be a list
    """
    if maxrecursion < 0:
        print('max recursion depth reached')
        # this will never happen normally, this check is to prevent DOS attacks
        return
    oldlength = len(answer)
    if alo == ahi or blo == bhi:
        return
    last_a_pos = alo-1
    last_b_pos = blo-1
    for apos, bpos in unique_lcs_py(a[alo:ahi], b[blo:bhi]):
        # recurse between lines which are unique in each file and match
        apos += alo
        bpos += blo
        # Most of the time, you will have a sequence of similar entries
        if last_a_pos+1 != apos or last_b_pos+1 != bpos:
            recurse_matches_py(a, b, last_a_pos+1, last_b_pos+1,
                               apos, bpos, answer, maxrecursion - 1)
        last_a_pos = apos
        last_b_pos = bpos
        answer.append((apos, bpos))
    if len(answer) > oldlength:
        # find matches between the last match and the end
        recurse_matches_py(a, b, last_a_pos+1, last_b_pos+1,
                           ahi, bhi, answer, maxrecursion - 1)
    elif a[alo] == b[blo]:
        # find matching lines at the very beginning
        while alo < ahi and blo < bhi and a[alo] == b[blo]:
            answer.append((alo, blo))
            alo += 1
            blo += 1
        recurse_matches_py(a, b, alo, blo,
                           ahi, bhi, answer, maxrecursion - 1)
    elif a[ahi - 1] == b[bhi - 1]:
        # find matching lines at the very end
        nahi = ahi - 1
        nbhi = bhi - 1
        while nahi > alo and nbhi > blo and a[nahi - 1] == b[nbhi - 1]:
            nahi -= 1
            nbhi -= 1
        recurse_matches_py(a, b, last_a_pos+1, last_b_pos+1,
                           nahi, nbhi, answer, maxrecursion - 1)
        for i in xrange(ahi - nahi):
            answer.append((nahi + i, nbhi + i))
def _collapse_sequences(matches):
    """Find sequences of lines.

    Given a sequence of [(line_in_a, line_in_b),]
    find regions where they both increment at the same time
    """
    answer = []
    start_a = start_b = None
    length = 0
    for i_a, i_b in matches:
        if (start_a is not None
            and (i_a == start_a + length)
            and (i_b == start_b + length)):
            length += 1
        else:
            if start_a is not None:
                answer.append((start_a, start_b, length))
            start_a = i_a
            start_b = i_b
            length = 1

    if length != 0:
        answer.append((start_a, start_b, length))

    return answer
def _check_consistency(answer):
    # For consistency sake, make sure all matches are only increasing
    next_a = -1
    next_b = -1
    for (a, b, match_len) in answer:
        if a < next_a:
            raise ValueError('Non increasing matches for a')
        if b < next_b:
            raise ValueError('Non increasing matches for b')
        next_a = a + match_len
        next_b = b + match_len
class PatienceSequenceMatcher_py(difflib.SequenceMatcher):
    """Compare a pair of sequences using longest common subset."""

    _do_check_consistency = True

    def __init__(self, isjunk=None, a='', b=''):
        if isjunk is not None:
            raise NotImplementedError('Currently we do not support'
                                      ' isjunk for sequence matching')
        difflib.SequenceMatcher.__init__(self, isjunk, a, b)

    def get_matching_blocks(self):
        """Return list of triples describing matching subsequences.

        Each triple is of the form (i, j, n), and means that
        a[i:i+n] == b[j:j+n]. The triples are monotonically increasing in
        i and in j.

        The last triple is a dummy, (len(a), len(b), 0), and is the only
        triple with n==0.

        >>> s = PatienceSequenceMatcher(None, "abxcd", "abcd")
        >>> s.get_matching_blocks()
        [(0, 0, 2), (3, 2, 2), (5, 4, 0)]
        """
        # jam 20060525 This is the python 2.4.1 difflib get_matching_blocks
        # implementation which uses __helper. 2.4.3 got rid of helper for
        # doing it inline with a queue.
        # We should consider doing the same for recurse_matches
        if self.matching_blocks is not None:
            return self.matching_blocks

        matches = []
        recurse_matches_py(self.a, self.b, 0, 0,
                           len(self.a), len(self.b), matches, 10)
        # Matches now has individual line pairs of
        # line A matches line B, at the given offsets
        self.matching_blocks = _collapse_sequences(matches)
        self.matching_blocks.append((len(self.a), len(self.b), 0))
        if PatienceSequenceMatcher_py._do_check_consistency:
            if __debug__:
                _check_consistency(self.matching_blocks)

        return self.matching_blocks
unique_lcs = unique_lcs_py
recurse_matches = recurse_matches_py
PatienceSequenceMatcher = PatienceSequenceMatcher_py
def main(args):
    import optparse
    p = optparse.OptionParser(usage='%prog [options] file_a file_b'
                                    '\nFiles can be "-" to read from stdin')
    p.add_option('--patience', dest='matcher', action='store_const', const='patience',
                 default='patience', help='Use the patience difference algorithm')
    p.add_option('--difflib', dest='matcher', action='store_const', const='difflib',
                 default='patience', help='Use python\'s difflib algorithm')

    algorithms = {'patience': PatienceSequenceMatcher, 'difflib': difflib.SequenceMatcher}

    (opts, args) = p.parse_args(args)
    matcher = algorithms[opts.matcher]

    if len(args) != 2:
        print('You must supply 2 filenames to diff')
        return -1

    for line in unified_diff_files(args[0], args[1], sequencematcher=matcher):
        sys.stdout.write(line)

if __name__ == '__main__':
    sys.exit(main(sys.argv[1:]))
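To try it on your data, you can either run it directly, e.g. python patiencediff.py --patience old.json new.json (a filename of "-" reads stdin), or import it. The snippet below is a minimal sketch of the import route; the module name patiencediff and the JSON file names are placeholders, not part of the script above:

import sys

# Assumes the script above was saved as patiencediff.py on the import path.
import patiencediff

# Placeholder file names for the two pretty-printed configs being compared.
with open('config_old.json') as f_a, open('config_new.json') as f_b:
    old_lines = f_a.readlines()
    new_lines = f_b.readlines()

# unified_diff() accepts any SequenceMatcher-compatible factory, so the
# patience matcher defined above can be dropped in.
for line in patiencediff.unified_diff(
        old_lines, new_lines,
        fromfile='config_old.json', tofile='config_new.json',
        sequencematcher=patiencediff.PatienceSequenceMatcher):
    sys.stdout.write(line)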
Edit 3: I have also made a minimally standalone version of Neil Fraser's Diff Match and Patch, and I would be very interested in the results of comparing it on your use case. Again, I claim no copyright.
Edit 4: I just found DataDiff, which might be another tool worth trying.
DataDiff is a library to provide human-readable diffs of python data structures. It can handle sequence types (lists, tuples, etc), sets, and dictionaries.
Dictionaries and sequences will be diffed recursively, when applicable.
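I have not used it myself, so take the snippet below as a hypothetical sketch only: it assumes the package exposes a top-level diff() helper whose result prints as a human-readable delta of two parsed structures; check the project's documentation for the actual API.

import json

# Hypothetical usage: 'from datadiff import diff' and a printable result are
# assumptions, not verified against the library.
from datadiff import diff

# Placeholder file names for the two configs being compared.
with open('config_old.json') as f_a, open('config_new.json') as f_b:
    old_data = json.load(f_a)
    new_data = json.load(f_b)

# Compares the parsed Python structures (lists/dicts) instead of text lines.
print(diff(old_data, new_data))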
Regarding python - text diff of JSON, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/4599456/