
Usage and code examples for the edu.stanford.nlp.process.WordToSentenceProcessor class

Reposted. Author: 知者. Updated: 2024-03-24 00:45:05

This article collects some Java code examples for the edu.stanford.nlp.process.WordToSentenceProcessor class and shows how the class is used. The examples are drawn from selected projects on platforms such as GitHub, Stack Overflow, and Maven, so they should serve as useful references. Details of the WordToSentenceProcessor class follow:
Package path: edu.stanford.nlp.process.WordToSentenceProcessor
Class name: WordToSentenceProcessor

Introduction to WordToSentenceProcessor

Transforms a List of words into a List of Lists of words (that is, a List of sentences), by grouping the words. The word stream is assumed to already be adequately tokenized, and this class just divides the List into sentences, perhaps discarding some separator tokens as it goes.

The main behavior is to look for sentence-ending tokens like "." or "?!?", and to split after them and any following sentence closers like ")". Overlaid on this is an overall choice of state: the WordToSentenceProcessor can be a non-splitter, which always returns one sentence. Otherwise, the WordToSentenceProcessor will also split based on paragraphs using one of three states: (1) ignore line breaks in splitting sentences, (2) treat each line as a separate paragraph, or (3) treat two consecutive line breaks as marking the end of a paragraph. The details of sentence breaking within paragraphs are controlled by the following variables:

  • sentenceBoundaryTokens are tokens that are left in a sentence, but are to be regarded as ending a sentence. A canonical example is a period. If two of these follow each other, the second will be a sentence consisting of only the sentenceBoundaryToken.
  • sentenceBoundaryFollowers are tokens that are left in a sentence, and which can follow a sentenceBoundaryToken while still belonging to the previous sentence. They cannot begin a sentence (except at the beginning of a document). A canonical example is a close parenthesis ')'.
  • sentenceBoundaryToDiscard are tokens which separate sentences and which should be thrown away. In web documents, a typical example would be a '<p>' tag. If two of these follow each other, they are coalesced: no empty Sentence is output. The end-of-file is not represented in this Set, but the code behaves as if it were a member.

  • regionElementRegex A regular expression for element names containing a sentence region. Only tokens in such elements will be included in sentences. The start and end tags themselves are not included in the sentence.

Instances of this class are now immutable. ☺
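As a minimal sketch of the behavior described above (the class name SplitDemo and the sample text are illustrative, not from CoreNLP):

import java.io.StringReader;
import java.util.List;

import edu.stanford.nlp.ling.Word;
import edu.stanford.nlp.process.PTBTokenizer;
import edu.stanford.nlp.process.WordToSentenceProcessor;

public class SplitDemo {
 public static void main(String[] args) {
  // Tokenize first: the word stream must already be tokenized so that
  // sentence-ending punctuation such as "." is its own token.
  List<Word> tokens = PTBTokenizer.newPTBTokenizer(
    new StringReader("Hello world. This is a test (really.)")).tokenize();

  // Default processor: splits after boundary tokens such as "." and keeps
  // followers such as ")" attached to the preceding sentence.
  WordToSentenceProcessor<Word> wts = new WordToSentenceProcessor<>();
  List<List<Word>> sentences = wts.process(tokens);

  for (List<Word> sentence : sentences) {
   System.out.println(sentence);
  }
 }
}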

Code examples

Code example source: stanfordnlp/CoreNLP

wts = new WordToSentenceProcessor<>();
List<List<IN>> sentences = wts.process(document);
List<IN> newDocument = new ArrayList<>();
for (List<IN> sentence : sentences) {

Code example source: stanfordnlp/CoreNLP

/**
  * For internal debugging purposes only.
  */
 public static void main(String[] args) {
  new BasicDocument<String>();
  Document<String, Word, Word> htmlDoc = BasicDocument.init("top text <h1>HEADING text</h1> this is <p>new paragraph<br>next line<br/>xhtml break etc.");
  System.out.println("Before:");
  System.out.println(htmlDoc);
  Document<String, Word, Word> txtDoc = new StripTagsProcessor<String, Word>(true).processDocument(htmlDoc);
  System.out.println("After:");
  System.out.println(txtDoc);
  Document<String, Word, List<Word>> sentences = new WordToSentenceProcessor<Word>().processDocument(txtDoc);
  System.out.println("Sentences:");
  System.out.println(sentences);
 }
}

Code example source: stanfordnlp/CoreNLP

/**
 * Returns a List of Lists where each element is built from a run
 * of Words in the input Document. Specifically, reads through each word in
 * the input document and breaks off a sentence after finding a valid
 * sentence boundary token or end of file.
 * Note that for this to work, the words in the
 * input document must have been tokenized with a tokenizer that makes
 * sentence boundary tokens their own tokens (e.g., {@link PTBTokenizer}).
 *
 * @param words A list of already tokenized words (must implement HasWord or be a String).
 * @return A list of sentences.
 * @see #WordToSentenceProcessor(String, String, Set, Set, String, NewlineIsSentenceBreak, SequencePattern, Set, boolean, boolean)
 */
// todo [cdm 2016]: Should really sort out generics here so don't need to have extra list copying
@Override
public List<List<IN>> process(List<? extends IN> words) {
 if (isOneSentence) {
  // put all the words in one sentence
  List<List<IN>> sentences = Generics.newArrayList();
  sentences.add(new ArrayList<>(words));
  return sentences;
 } else {
  return wordsToSentences(words);
 }
}

Code example source: stanfordnlp/CoreNLP

public WordsToSentencesAnnotator(boolean verbose, String boundaryTokenRegex,
                 Set<String> boundaryToDiscard, Set<String> htmlElementsToDiscard,
                 String newlineIsSentenceBreak, String boundaryMultiTokenRegex,
                 Set<String> tokenRegexesToDiscard) {
 this(verbose, false,
     new WordToSentenceProcessor<>(boundaryTokenRegex, null,
         boundaryToDiscard, htmlElementsToDiscard,
         WordToSentenceProcessor.stringToNewlineIsSentenceBreak(newlineIsSentenceBreak),
         (boundaryMultiTokenRegex != null) ? TokenSequencePattern.compile(boundaryMultiTokenRegex) : null, tokenRegexesToDiscard));
}
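A hedged sketch of invoking the constructor above; the regex and set values here are illustrative choices, not CoreNLP defaults:

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import edu.stanford.nlp.pipeline.WordsToSentencesAnnotator;

Set<String> newlineTokens = new HashSet<>(Arrays.asList("\n", "*NL*")); // illustrative
Set<String> skipElements = new HashSet<>(Arrays.asList("script", "style")); // illustrative
WordsToSentencesAnnotator ssplit = new WordsToSentencesAnnotator(
  false,          // verbose
  "\\.|[!?]+",    // boundaryTokenRegex: split after ".", "!", "?"
  newlineTokens,  // boundaryToDiscard: separator tokens dropped from the output
  skipElements,   // htmlElementsToDiscard: ignore tokens inside these elements
  "two",          // newlineIsSentenceBreak: two consecutive newlines end a paragraph
  null,           // boundaryMultiTokenRegex: none
  null);          // tokenRegexesToDiscard: none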

Code example source: stanfordnlp/CoreNLP

// Excerpt from wordsToSentences(): classify the current token o.
String word = getString(o);
boolean forcedEnd = isForcedEndToken(o);
discardToken = matchesTokenPatternsToDiscard(word);
lastTokenWasNewline = false;
Boolean isb;
if (xmlBreakElementsToDiscard != null && matchesXmlBreakElementToDiscard(word)) {
 newSentForced = true;
 if (DEBUG) { log.info("Word is " + word + "; is XML break element; discarded"); }

Code example source: stanfordnlp/CoreNLP

public <L, F> Document<L, F, List<IN>> processDocument(Document<L, F, IN> in) {
 Document<L, F, List<IN>> doc = in.blankDocument();
 doc.addAll(process(in));
 return doc;
}

Code example source: stanfordnlp/CoreNLP

/** Return a WordsToSentencesAnnotator that never splits the token stream. You just get one sentence.
 *
 *  @return A WordsToSentencesAnnotator.
 */
public static WordsToSentencesAnnotator nonSplitter() {
 WordToSentenceProcessor<CoreLabel> wts = new WordToSentenceProcessor<>(true);
 return new WordsToSentencesAnnotator(false, false, wts);
}
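A small sketch of this non-splitting mode applied directly to a token list (the tokens variable is assumed to hold already-tokenized CoreLabels):

WordToSentenceProcessor<CoreLabel> one = new WordToSentenceProcessor<>(true);
List<List<CoreLabel>> result = one.process(tokens);
// result.size() == 1: every input token ends up in the single output sentence.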

Code example source: stanfordnlp/CoreNLP

private boolean matchesXmlBreakElementToDiscard(String word) {
 return matches(xmlBreakElementsToDiscard, word);
}

Code example source: edu.stanford.nlp/stanford-corenlp

// Excerpt from wordsToSentences(): classify the current token o.
String word = getString(o);
boolean forcedEnd = isForcedEndToken(o);
discardToken = matchesTokenPatternsToDiscard(word);
lastTokenWasNewline = false;
Boolean isb;
if (xmlBreakElementsToDiscard != null && matchesXmlBreakElementToDiscard(word)) {
 newSentForced = true;
 if (DEBUG) { log.info("Word is " + word + "; is XML break element; discarded"); }

Code example source: stanfordnlp/CoreNLP

List<List<IN>> sentences = wts.process(words);
String after = "";
IN last = null;

Code example source: stanfordnlp/CoreNLP

// Excerpt from the WordsToSentencesAnnotator constructor: the splitter is
// built differently depending on the configured options.
// Newline-only splitting on "\n":
WordToSentenceProcessor<CoreLabel> wts1 =
  new WordToSentenceProcessor<>(ArrayUtils.asImmutableSet(new String[]{"\n"}));
this.countLineNumbers = true;
this.wts = wts1;
// ... or on the platform line separator as well as "\n":
wts1 = new WordToSentenceProcessor<>(ArrayUtils.asImmutableSet(new String[]{System.lineSeparator(), "\n"}));
this.countLineNumbers = true;
this.wts = wts1;
// ... or on the tokenizer's fake newline token:
wts1 = new WordToSentenceProcessor<>(ArrayUtils.asImmutableSet(new String[]{PTBTokenizer.getNewlineToken()}));
this.countLineNumbers = true;
this.wts = wts1;
// One-sentence mode: never split.
if (Boolean.parseBoolean(isOneSentence)) { // this method treats null as false
 wts1 = new WordToSentenceProcessor<>(true);
 this.countLineNumbers = false;
 this.wts = wts1;
}
// Otherwise, the fully configurable splitter:
this.wts = new WordToSentenceProcessor<>(boundaryTokenRegex, boundaryFollowersRegex,
  boundariesToDiscard, htmlElementsToDiscard,
  WordToSentenceProcessor.stringToNewlineIsSentenceBreak(nlsb),
  (boundaryMultiTokenRegex != null) ? TokenSequencePattern.compile(boundaryMultiTokenRegex) : null, tokenRegexesToDiscard);
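The excerpt above is the property-driven side of sentence splitting. A hedged sketch of driving the same choices from a StanfordCoreNLP pipeline via the standard ssplit properties (the demo class name and sample text are illustrative):

import java.util.Properties;

import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

public class SsplitConfigDemo {
 public static void main(String[] args) {
  Properties props = new Properties();
  props.setProperty("annotators", "tokenize, ssplit");
  props.setProperty("ssplit.newlineIsSentenceBreak", "two"); // "always", "never", or "two"
  // props.setProperty("ssplit.eolonly", "true");       // split on newline tokens only
  // props.setProperty("ssplit.isOneSentence", "true"); // never split
  StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
  pipeline.annotate(new Annotation("First line.\nSecond line."));
 }
}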

Code example source: stanfordnlp/CoreNLP

/** Return a WordsToSentencesAnnotator that splits on newlines (only), which are then deleted.
 *  This constructor counts the lines by putting in empty token lists for empty lines.
 *  It tells the underlying splitter to return empty lists of tokens
 *  and then treats those empty lists as empty lines.  We don't
 *  actually include empty sentences in the annotation, though. But they
 *  are used in numbering the sentence. Only this constructor leads to
 *  empty sentences.
 *
 *  @param  nlToken Zero or more new line tokens, which might be a {@literal \n} or the fake
 *                 newline tokens returned from the tokenizer.
 *  @return A WordsToSentencesAnnotator.
 */
public static WordsToSentencesAnnotator newlineSplitter(String... nlToken) {
 // this constructor will keep empty lines as empty sentences
 WordToSentenceProcessor<CoreLabel> wts =
     new WordToSentenceProcessor<>(ArrayUtils.asImmutableSet(nlToken));
 return new WordsToSentencesAnnotator(false, true, wts);
}
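A usage sketch for the factory above; it assumes the tokenizer was run with an option that keeps newlines as tokens (for example PTBTokenizer's tokenizeNLs option), so that both "\n" and the tokenizer's fake newline token can appear in the stream:

WordsToSentencesAnnotator lineSplitter =
  WordsToSentencesAnnotator.newlineSplitter("\n", PTBTokenizer.getNewlineToken());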

Code example source: stanfordnlp/CoreNLP

private boolean matchesTokenPatternsToDiscard(String word) {
 return matches(tokenPatternsToDiscard, word);
}

Code example source: stanfordnlp/CoreNLP

public static void addEnhancedSentences(Annotation doc) {
 // for every sentence that begins a paragraph: append this sentence and the previous one and see if the
 // sentence splitter would make a single sentence out of it. If so, add it as an extra sentence.
 // for each sieve that potentially uses augmentedSentences in original:
 List<CoreMap> sentences = doc.get(CoreAnnotations.SentencesAnnotation.class);
 WordToSentenceProcessor<CoreLabel> wsp =
   new WordToSentenceProcessor<>(WordToSentenceProcessor.NewlineIsSentenceBreak.NEVER); // create a sentence splitter that never splits on newline
 int prevParagraph = 0;
 for (int i = 1; i < sentences.size(); i++) {
  CoreMap sentence = sentences.get(i);
  CoreMap prevSentence = sentences.get(i - 1);
  List<CoreLabel> tokensConcat = new ArrayList<>();
  tokensConcat.addAll(prevSentence.get(CoreAnnotations.TokensAnnotation.class));
  tokensConcat.addAll(sentence.get(CoreAnnotations.TokensAnnotation.class));
  List<List<CoreLabel>> sentenceTokens = wsp.process(tokensConcat);
  if (sentenceTokens.size() == 1) { // wsp would have put them into a single sentence --> add enhanced sentence.
   sentence.set(EnhancedSentenceAnnotation.class, constructSentence(sentenceTokens.get(0), prevSentence, sentence));
  }
 }
}
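The trick in this snippet is worth spelling out: the code concatenates each sentence with its predecessor and re-runs a splitter configured with NewlineIsSentenceBreak.NEVER over the combined token list. If that newline-blind splitter keeps all the tokens in one sentence, the original split must have been forced by a line break rather than by punctuation, so the pair is recorded as an enhanced candidate sentence.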

Code example source: edu.stanford.nlp/stanford-parser

// Excerpt from wordsToSentences(): classify the current token o.
String word = getString(o);
boolean forcedEnd = isForcedEndToken(o);
discardToken = matchesTokenPatternsToDiscard(word);
lastTokenWasNewline = false;
Boolean isb;
if (xmlBreakElementsToDiscard != null && matchesXmlBreakElementToDiscard(word)) {
 newSentForced = true;
 if (DEBUG) { log.info("Word is " + word + "; is XML break element; discarded"); }

Code example source: stanfordnlp/CoreNLP

for (List<CoreLabel> sentenceTokens: wts.process(tokens)) {
 if (countLineNumbers) {
  ++lineNumber;

Code example source: edu.stanford.nlp/corenlp

/**
  * For internal debugging purposes only.
  */
 public static void main(String[] args) {
  new BasicDocument<String>();
  Document<String, Word, Word> htmlDoc = BasicDocument.init("top text <h1>HEADING text</h1> this is <p>new paragraph<br>next line<br/>xhtml break etc.");
  System.out.println("Before:");
  System.out.println(htmlDoc);
  Document<String, Word, Word> txtDoc = new StripTagsProcessor<String, Word>(true).processDocument(htmlDoc);
  System.out.println("After:");
  System.out.println(txtDoc);
  Document<String, Word, List<Word>> sentences = new WordToSentenceProcessor<Word>().processDocument(txtDoc);
  System.out.println("Sentences:");
  System.out.println(sentences);
 }
}

Code example source: com.guokr/stan-cn-com

public WordsToSentencesAnnotator(boolean verbose, String boundaryTokenRegex,
                 Set<String> boundaryToDiscard, Set<String> htmlElementsToDiscard,
                 String newlineIsSentenceBreak) {
 this(verbose, false,
    new WordToSentenceProcessor<CoreLabel>(boundaryTokenRegex,
        boundaryToDiscard, htmlElementsToDiscard,
        WordToSentenceProcessor.stringToNewlineIsSentenceBreak(newlineIsSentenceBreak)));
}

Code example source: com.guokr/stan-cn-com

public WordsToSentencesAnnotator(boolean verbose) {
 this(verbose, false, new WordToSentenceProcessor<CoreLabel>());
}

Code example source: edu.stanford.nlp/corenlp

public List<List<IN>> process(List<? extends IN> words) {
 if (isOneSentence) {
  List<List<IN>> sentences = Generics.newArrayList();
  sentences.add(new ArrayList<IN>(words));
  return sentences;
 } else {
  return wordsToSentences(words);
 }
}
