This article collects code examples for the Java class edu.stanford.nlp.process.WordToSentenceProcessor, showing how the class is used in practice. The examples are drawn mainly from GitHub, Stack Overflow, Maven, and similar platforms, extracted from selected projects, so they should be useful references. Details of the WordToSentenceProcessor class:
Package path: edu.stanford.nlp.process.WordToSentenceProcessor
Class name: WordToSentenceProcessor
Transforms a List of words into a List of Lists of words (that is, a List of sentences) by grouping the words. The word stream is assumed to already be adequately tokenized, and this class just divides the List into sentences, perhaps discarding some separator tokens as it goes.
The main behavior is to look for sentence-ending tokens like "." or "?!?" and to split after them and any following sentence closers like ")". Overlaid on this is an overall choice of state: the WordToSentenceProcessor can be a non-splitter, which always returns one sentence. Otherwise, it will also split based on paragraphs, using one of three states: (1) ignore line breaks when splitting sentences, (2) treat each line as a separate paragraph, or (3) treat two consecutive line breaks as marking the end of a paragraph. The details of sentence breaking within paragraphs are controlled by the following variables:
* sentenceBoundaryTokens are tokens that are left in a sentence but are regarded as ending a sentence. A canonical example is a period. If two of these follow each other, the second will be a sentence consisting of only the sentenceBoundaryToken.
* sentenceBoundaryFollowers are tokens that are left in a sentence and may follow a sentenceBoundaryToken while still belonging to the previous sentence. They cannot begin a sentence (except at the beginning of a document). A canonical example is the close parenthesis ')'.
* sentenceBoundaryToDiscard are tokens which separate sentences and which should be thrown away. In web documents, a typical example is the '<p>' tag. If two of these follow each other, they are coalesced: no empty sentence is output. The end-of-file is not represented in this set, but the code behaves as if it were a member.
* regionElementRegex is a regular expression for the names of elements containing a sentence region. Only tokens inside such elements will be included in sentences. The start and end tags themselves are not included in the sentence.
Instances of this class are now immutable. ☺
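To make the behavior above concrete, here is a minimal, self-contained usage sketch (an illustration written for this article, not taken from the projects listed below; it assumes stanford-corenlp is on the classpath). PTBTokenizer makes sentence-ending punctuation its own token, which is exactly the kind of input WordToSentenceProcessor expects:

import java.io.StringReader;
import java.util.List;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.process.CoreLabelTokenFactory;
import edu.stanford.nlp.process.PTBTokenizer;
import edu.stanford.nlp.process.WordToSentenceProcessor;

public class SentenceSplitDemo {
  public static void main(String[] args) {
    String text = "It works. (Really.) Does it?";
    // Tokenize so that boundary tokens like "." and "?" are their own tokens.
    PTBTokenizer<CoreLabel> tokenizer =
        new PTBTokenizer<>(new StringReader(text), new CoreLabelTokenFactory(), "");
    List<CoreLabel> tokens = tokenizer.tokenize();
    // Default configuration: split after ".", "?", "!" and similar tokens,
    // keeping sentence closers such as ")" with the previous sentence.
    WordToSentenceProcessor<CoreLabel> splitter = new WordToSentenceProcessor<>();
    List<List<CoreLabel>> sentences = splitter.process(tokens);
    for (List<CoreLabel> sentence : sentences) {
      System.out.println(sentence);  // expect three sentences
    }
  }
}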
Code example source: stanfordnlp/CoreNLP
wts = new WordToSentenceProcessor<>();
List<List<IN>> sentences = wts.process(document);
List<IN> newDocument = new ArrayList<>();
for (List<IN> sentence : sentences) {
  // ... (per-sentence processing elided in this excerpt)
}
Code example source: stanfordnlp/CoreNLP
/**
* For internal debugging purposes only.
*/
public static void main(String[] args) {
new BasicDocument<String>();
Document<String, Word, Word> htmlDoc = BasicDocument.init("top text <h1>HEADING text</h1> this is <p>new paragraph<br>next line<br/>xhtml break etc.");
System.out.println("Before:");
System.out.println(htmlDoc);
Document<String, Word, Word> txtDoc = new StripTagsProcessor<String, Word>(true).processDocument(htmlDoc);
System.out.println("After:");
System.out.println(txtDoc);
Document<String, Word, List<Word>> sentences = new WordToSentenceProcessor<Word>().processDocument(txtDoc);
System.out.println("Sentences:");
System.out.println(sentences);
}
}
Code example source: stanfordnlp/CoreNLP
/**
* Returns a List of Lists where each element is built from a run
* of Words in the input Document. Specifically, reads through each word in
* the input document and breaks off a sentence after finding a valid
* sentence boundary token or end of file.
* Note that for this to work, the words in the
* input document must have been tokenized with a tokenizer that makes
* sentence boundary tokens their own tokens (e.g., {@link PTBTokenizer}).
*
* @param words A list of already tokenized words (must implement HasWord or be a String).
* @return A list of sentences.
* @see #WordToSentenceProcessor(String, String, Set, Set, String, NewlineIsSentenceBreak, SequencePattern, Set, boolean, boolean)
*/
// todo [cdm 2016]: Should really sort out generics here so don't need to have extra list copying
@Override
public List<List<IN>> process(List<? extends IN> words) {
if (isOneSentence) {
// put all the words in one sentence
List<List<IN>> sentences = Generics.newArrayList();
sentences.add(new ArrayList<>(words));
return sentences;
} else {
return wordsToSentences(words);
}
}
Code example source: stanfordnlp/CoreNLP
public WordsToSentencesAnnotator(boolean verbose, String boundaryTokenRegex,
Set<String> boundaryToDiscard, Set<String> htmlElementsToDiscard,
String newlineIsSentenceBreak, String boundaryMultiTokenRegex,
Set<String> tokenRegexesToDiscard) {
this(verbose, false,
new WordToSentenceProcessor<>(boundaryTokenRegex, null,
boundaryToDiscard, htmlElementsToDiscard,
WordToSentenceProcessor.stringToNewlineIsSentenceBreak(newlineIsSentenceBreak),
(boundaryMultiTokenRegex != null) ? TokenSequencePattern.compile(boundaryMultiTokenRegex) : null, tokenRegexesToDiscard));
}
Code example source: stanfordnlp/CoreNLP
String word = getString(o);
boolean forcedEnd = isForcedEndToken(o);
discardToken = matchesTokenPatternsToDiscard(word);
lastTokenWasNewline = false;
Boolean isb;
if (xmlBreakElementsToDiscard != null && matchesXmlBreakElementToDiscard(word)) {
  newSentForced = true;
  if (DEBUG) { log.info("Word is " + word + "; is XML break element; discarded"); }
  // ... (further boundary handling elided in this excerpt)
}
Code example source: stanfordnlp/CoreNLP
public <L, F> Document<L, F, List<IN>> processDocument(Document<L, F, IN> in) {
Document<L, F, List<IN>> doc = in.blankDocument();
doc.addAll(process(in));
return doc;
}
Code example source: stanfordnlp/CoreNLP
/** Return a WordsToSentencesAnnotator that never splits the token stream. You just get one sentence.
*
* @return A WordsToSentenceAnnotator.
*/
public static WordsToSentencesAnnotator nonSplitter() {
WordToSentenceProcessor<CoreLabel> wts = new WordToSentenceProcessor<>(true);
return new WordsToSentencesAnnotator(false, false, wts);
}
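For comparison, a short illustrative sketch (reusing the tokens list from the first example above) of the one-sentence mode that nonSplitter() wraps:

// One-sentence ("non-splitter") mode: the whole token stream comes back
// as a single sentence, regardless of punctuation.
WordToSentenceProcessor<CoreLabel> oneSentence = new WordToSentenceProcessor<>(true);
List<List<CoreLabel>> all = oneSentence.process(tokens);
// all.size() == 1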
Code example source: stanfordnlp/CoreNLP
private boolean matchesXmlBreakElementToDiscard(String word) {
return matches(xmlBreakElementsToDiscard, word);
}
Code example source: stanfordnlp/CoreNLP
List<List<IN>> sentences = wts.process(words);
String after = "";
IN last = null;
Code example source: stanfordnlp/CoreNLP
// Excerpt from the WordsToSentencesAnnotator(Properties) constructor; the
// surrounding if/else structure (restored here) picks a splitting strategy.
if (whitespaceTokenization) {
  if (System.lineSeparator().equals("\n")) {
    WordToSentenceProcessor<CoreLabel> wts1 =
        new WordToSentenceProcessor<>(ArrayUtils.asImmutableSet(new String[]{"\n"}));
    this.countLineNumbers = true;
    this.wts = wts1;
  } else {
    // also split on "\n" in case files use it instead of the system separator
    WordToSentenceProcessor<CoreLabel> wts1 =
        new WordToSentenceProcessor<>(ArrayUtils.asImmutableSet(new String[]{System.lineSeparator(), "\n"}));
    this.countLineNumbers = true;
    this.wts = wts1;
  }
} else {
  WordToSentenceProcessor<CoreLabel> wts1 =
      new WordToSentenceProcessor<>(ArrayUtils.asImmutableSet(new String[]{PTBTokenizer.getNewlineToken()}));
  this.countLineNumbers = true;
  this.wts = wts1;
}
// ...
if (Boolean.parseBoolean(isOneSentence)) { // this method treats null as false
  WordToSentenceProcessor<CoreLabel> wts1 = new WordToSentenceProcessor<>(true);
  this.countLineNumbers = false;
  this.wts = wts1;
} else {
  this.countLineNumbers = false;
  this.wts = new WordToSentenceProcessor<>(boundaryTokenRegex, boundaryFollowersRegex,
      boundariesToDiscard, htmlElementsToDiscard,
      WordToSentenceProcessor.stringToNewlineIsSentenceBreak(nlsb),
      (boundaryMultiTokenRegex != null) ? TokenSequencePattern.compile(boundaryMultiTokenRegex) : null,
      tokenRegexesToDiscard);
}
Code example source: stanfordnlp/CoreNLP
/** Return a WordsToSentencesAnnotator that splits on newlines (only), which are then deleted.
* This constructor counts the lines by putting in empty token lists for empty lines.
* It tells the underlying splitter to return empty lists of tokens
* and then treats those empty lists as empty lines. We don't
* actually include empty sentences in the annotation, though. But they
* are used in numbering the sentence. Only this constructor leads to
* empty sentences.
*
* @param nlToken Zero or more new line tokens, which might be a {@literal \n} or the fake
* newline tokens returned from the tokenizer.
* @return A WordsToSentenceAnnotator.
*/
public static WordsToSentencesAnnotator newlineSplitter(String... nlToken) {
// this constructor will keep empty lines as empty sentences
WordToSentenceProcessor<CoreLabel> wts =
new WordToSentenceProcessor<>(ArrayUtils.asImmutableSet(nlToken));
return new WordsToSentencesAnnotator(false, true, wts);
}
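A hedged, illustrative sketch of this newline-only splitting mode, using the same Set-based constructor the factory method above passes to WordToSentenceProcessor:

import java.util.Arrays;
import java.util.List;
import edu.stanford.nlp.ling.Word;
import edu.stanford.nlp.process.WordToSentenceProcessor;
import edu.stanford.nlp.util.ArrayUtils;

public class NewlineSplitDemo {
  public static void main(String[] args) {
    // Two "lines" of pre-tokenized text; "\n" tokens mark the line breaks.
    List<Word> tokens = Arrays.asList(
        new Word("line"), new Word("one"), new Word("\n"),
        new Word("line"), new Word("two"), new Word("\n"));
    // Split only on the "\n" boundary tokens, which are then discarded.
    WordToSentenceProcessor<Word> splitter =
        new WordToSentenceProcessor<>(ArrayUtils.asImmutableSet(new String[]{"\n"}));
    // Expect two sentences: [line, one] and [line, two]
    System.out.println(splitter.process(tokens));
  }
}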
Code example source: stanfordnlp/CoreNLP
private boolean matchesTokenPatternsToDiscard(String word) {
return matches(tokenPatternsToDiscard, word);
}
Code example source: stanfordnlp/CoreNLP
public static void addEnhancedSentences(Annotation doc) {
//for every sentence that begins a paragraph: append this sentence and the previous one and see if sentence splitter would make a single sentence out of it. If so, add as extra sentence.
//for each sieve that potentially uses augmentedSentences in original:
List<CoreMap> sentences = doc.get(CoreAnnotations.SentencesAnnotation.class);
WordToSentenceProcessor wsp =
new WordToSentenceProcessor(WordToSentenceProcessor.NewlineIsSentenceBreak.NEVER); //create SentenceSplitter that never splits on newline
int prevParagraph = 0;
for(int i = 1; i < sentences.size(); i++) {
CoreMap sentence = sentences.get(i);
CoreMap prevSentence = sentences.get(i-1);
List<CoreLabel> tokensConcat = new ArrayList<>();
tokensConcat.addAll(prevSentence.get(CoreAnnotations.TokensAnnotation.class));
tokensConcat.addAll(sentence.get(CoreAnnotations.TokensAnnotation.class));
List<List<CoreLabel>> sentenceTokens = wsp.process(tokensConcat);
if(sentenceTokens.size() == 1) { //wsp would have put them into a single sentence --> add enhanced sentence.
sentence.set(EnhancedSentenceAnnotation.class, constructSentence(sentenceTokens.get(0), prevSentence, sentence));
}
}
}
Code example source: stanfordnlp/CoreNLP
for (List<CoreLabel> sentenceTokens : wts.process(tokens)) {
  if (countLineNumbers) {
    ++lineNumber;
  }
  // ... (sentence annotation logic elided in this excerpt)
}
Code example source: com.guokr/stan-cn-com
public WordsToSentencesAnnotator(boolean verbose, String boundaryTokenRegex,
Set<String> boundaryToDiscard, Set<String> htmlElementsToDiscard,
String newlineIsSentenceBreak) {
this(verbose, false,
new WordToSentenceProcessor<CoreLabel>(boundaryTokenRegex,
boundaryToDiscard, htmlElementsToDiscard,
WordToSentenceProcessor.stringToNewlineIsSentenceBreak(newlineIsSentenceBreak)));
}
Code example source: com.guokr/stan-cn-com
public WordsToSentencesAnnotator(boolean verbose) {
this(verbose, false, new WordToSentenceProcessor<CoreLabel>());
}