This article collects Java code examples for the edu.stanford.nlp.process.WordToSentenceProcessor class and shows how the class is used in practice. The examples are extracted from curated projects hosted on platforms such as GitHub, Stack Overflow, and Maven, so they should serve as useful references. Details of the WordToSentenceProcessor class follow:
Package path: edu.stanford.nlp.process.WordToSentenceProcessor
Class name: WordToSentenceProcessor
Transforms a List of words into a List of Lists of words (that is, a List of sentences), by grouping the words. The word stream is assumed to already be adequately tokenized, and this class just divides the List into sentences, perhaps discarding some separator tokens as it goes.
The main behavior is to look for sentence-ending tokens like "." or "?!?", and to split after them and any following sentence closers like ")". Overlaid on this is an overall choice of state: the WordToSentenceProcessor can be a non-splitter, which always returns one sentence. Otherwise, the WordToSentenceProcessor will also split based on paragraphs, using one of these three states: (1) ignore line breaks in splitting sentences, (2) treat each line as a separate paragraph, or (3) treat two consecutive line breaks as marking the end of a paragraph. The details of sentence breaking within paragraphs are controlled by the following variables:
*sentenceBoundaryTokens are tokens that are left in a sentence but are regarded as ending it. The canonical example is a period. If two of these follow each other, the second will be a sentence consisting of only the sentenceBoundaryToken.
*sentenceBoundaryFollowers are tokens that are left in a sentence and that can follow a sentenceBoundaryToken while still belonging to the previous sentence. They cannot begin a sentence (except at the beginning of a document). A canonical example is a close parenthesis ')'.
*sentenceBoundaryToDiscard are tokens that separate sentences and that should be thrown away. In web documents, a typical example is a '<p>' tag. If two of these follow each other, they are coalesced: no empty sentence is output. The end-of-file is not represented in this Set, but the code behaves as if it were a member.
*regionElementRegex is a regular expression for element names containing a sentence region. Only tokens inside such elements will be included in sentences; the start and end tags themselves are not included in the sentence.
Instances of this class are now immutable. ☺
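To make the boundary rules concrete, here is a minimal usage sketch (not taken from the projects below; it assumes CoreNLP on the classpath and Java 9+ for List.of):
import java.util.List;
import edu.stanford.nlp.ling.Word;
import edu.stanford.nlp.process.WordToSentenceProcessor;

public class SplitSketch {
  public static void main(String[] args) {
    // "." is a sentenceBoundaryToken; the ")" right after it is a
    // sentenceBoundaryFollower, so it stays attached to the first sentence.
    List<Word> tokens = List.of(
        new Word("Hello"), new Word("world"), new Word("."), new Word(")"),
        new Word("Second"), new Word("sentence"), new Word("?!?"));
    WordToSentenceProcessor<Word> wts = new WordToSentenceProcessor<>();
    List<List<Word>> sentences = wts.process(tokens);
    System.out.println(sentences.size() + " sentences: " + sentences);  // expect 2
  }
}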
Code example from: stanfordnlp/CoreNLP
wts = new WordToSentenceProcessor<>();
List<List<IN>> sentences = wts.process(document);
List<IN> newDocument = new ArrayList<>();
for (List<IN> sentence : sentences) {
  newDocument.addAll(sentence);  // loop body truncated in the original listing; a plausible completion
}
Code example from: stanfordnlp/CoreNLP
/**
* For internal debugging purposes only.
*/
public static void main(String[] args) {
new BasicDocument<String>();
Document<String, Word, Word> htmlDoc = BasicDocument.init("top text <h1>HEADING text</h1> this is <p>new paragraph<br>next line<br/>xhtml break etc.");
System.out.println("Before:");
System.out.println(htmlDoc);
Document<String, Word, Word> txtDoc = new StripTagsProcessor<String, Word>(true).processDocument(htmlDoc);
System.out.println("After:");
System.out.println(txtDoc);
Document<String, Word, List<Word>> sentences = new WordToSentenceProcessor<Word>().processDocument(txtDoc);
System.out.println("Sentences:");
System.out.println(sentences);
}
}
Code example from: stanfordnlp/CoreNLP
/**
* Returns a List of Lists where each element is built from a run
* of Words in the input Document. Specifically, reads through each word in
* the input document and breaks off a sentence after finding a valid
* sentence boundary token or end of file.
* Note that for this to work, the words in the
* input document must have been tokenized with a tokenizer that makes
* sentence boundary tokens their own tokens (e.g., {@link PTBTokenizer}).
*
* @param words A list of already tokenized words (must implement HasWord or be a String).
* @return A list of sentences.
* @see #WordToSentenceProcessor(String, String, Set, Set, String, NewlineIsSentenceBreak, SequencePattern, Set, boolean, boolean)
*/
// todo [cdm 2016]: Should really sort out generics here so don't need to have extra list copying
@Override
public List<List<IN>> process(List<? extends IN> words) {
if (isOneSentence) {
// put all the words in one sentence
List<List<IN>> sentences = Generics.newArrayList();
sentences.add(new ArrayList<>(words));
return sentences;
} else {
return wordsToSentences(words);
}
}
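As the javadoc above stresses, process() only works if boundary tokens stand alone, so raw text should first go through a tokenizer such as PTBTokenizer. A minimal sketch (the sample text is illustrative, not from the project):
import java.io.StringReader;
import java.util.List;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.process.CoreLabelTokenFactory;
import edu.stanford.nlp.process.PTBTokenizer;
import edu.stanford.nlp.process.WordToSentenceProcessor;

public class ProcessSketch {
  public static void main(String[] args) {
    String text = "Dr. Smith arrived. He was late (as usual).";
    // PTBTokenizer emits sentence-final "." as its own token, as process() requires
    PTBTokenizer<CoreLabel> tokenizer =
        new PTBTokenizer<>(new StringReader(text), new CoreLabelTokenFactory(), "");
    List<CoreLabel> tokens = tokenizer.tokenize();
    for (List<CoreLabel> sentence : new WordToSentenceProcessor<CoreLabel>().process(tokens)) {
      System.out.println(sentence);
    }
  }
}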
Code example from: stanfordnlp/CoreNLP
public WordsToSentencesAnnotator(boolean verbose, String boundaryTokenRegex,
Set<String> boundaryToDiscard, Set<String> htmlElementsToDiscard,
String newlineIsSentenceBreak, String boundaryMultiTokenRegex,
Set<String> tokenRegexesToDiscard) {
this(verbose, false,
new WordToSentenceProcessor<>(boundaryTokenRegex, null,
boundaryToDiscard, htmlElementsToDiscard,
WordToSentenceProcessor.stringToNewlineIsSentenceBreak(newlineIsSentenceBreak),
(boundaryMultiTokenRegex != null) ? TokenSequencePattern.compile(boundaryMultiTokenRegex) : null, tokenRegexesToDiscard));
}
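The boundaryTokenRegex and related arguments are exposed through the pipeline's ssplit.* properties. A hypothetical sketch (the regex and sample text are illustrative only):
import java.util.Properties;
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

public class BoundaryRegexSketch {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.setProperty("annotators", "tokenize,ssplit");
    // illustrative: additionally treat ";" as a sentence-ending token
    props.setProperty("ssplit.boundaryTokenRegex", "\\.|[!?;]+");
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
    Annotation doc = new Annotation("First clause; second clause.");
    pipeline.annotate(doc);
    System.out.println(doc.get(CoreAnnotations.SentencesAnnotation.class).size());  // expect 2
  }
}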
Code example from: stanfordnlp/CoreNLP
// Excerpt from the per-token loop inside wordsToSentences(): XML break
// elements (e.g. <p>) force a sentence break and are discarded.
String word = getString(o);
boolean forcedEnd = isForcedEndToken(o);
discardToken = matchesTokenPatternsToDiscard(word);
lastTokenWasNewline = false;
Boolean isb;
if (xmlBreakElementsToDiscard != null && matchesXmlBreakElementToDiscard(word)) {
  newSentForced = true;
  if (DEBUG) { log.info("Word is " + word + "; is XML break element; discarded"); }
}
Code example from: stanfordnlp/CoreNLP
public <L, F> Document<L, F, List<IN>> processDocument(Document<L, F, IN> in) {
Document<L, F, List<IN>> doc = in.blankDocument();
doc.addAll(process(in));
return doc;
}
Code example from: stanfordnlp/CoreNLP
/** Return a WordsToSentencesAnnotator that never splits the token stream. You just get one sentence.
*
* @return A WordsToSentenceAnnotator.
*/
public static WordsToSentencesAnnotator nonSplitter() {
WordToSentenceProcessor<CoreLabel> wts = new WordToSentenceProcessor<>(true);
return new WordsToSentencesAnnotator(false, false, wts);
}
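The same never-split behavior is available in a full pipeline through the documented ssplit.isOneSentence property; for example:
import java.util.Properties;
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

public class OneSentenceSketch {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.setProperty("annotators", "tokenize,ssplit");
    props.setProperty("ssplit.isOneSentence", "true");  // never split: one sentence out
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
    Annotation doc = new Annotation("First. Second. Third.");
    pipeline.annotate(doc);
    System.out.println(doc.get(CoreAnnotations.SentencesAnnotation.class).size());  // prints 1
  }
}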
Code example from: stanfordnlp/CoreNLP
private boolean matchesXmlBreakElementToDiscard(String word) {
return matches(xmlBreakElementsToDiscard, word);
}
Code example from: stanfordnlp/CoreNLP
// Excerpt (remainder truncated in the original listing):
List<List<IN>> sentences = wts.process(words);
String after = "";
IN last = null;
Code example from: stanfordnlp/CoreNLP
// Excerpt from WordsToSentencesAnnotator(Properties): choosing the underlying
// WordToSentenceProcessor from the ssplit.* properties.
if (nlSplitting) {
  if (whitespaceTokenization) {
    if (System.lineSeparator().equals("\n")) {
      WordToSentenceProcessor<CoreLabel> wts1 =
          new WordToSentenceProcessor<>(ArrayUtils.asImmutableSet(new String[]{"\n"}));
      this.countLineNumbers = true;
      this.wts = wts1;
    } else {
      // also split on "\n" in case files use it instead of the system separator
      WordToSentenceProcessor<CoreLabel> wts1 =
          new WordToSentenceProcessor<>(ArrayUtils.asImmutableSet(new String[]{System.lineSeparator(), "\n"}));
      this.countLineNumbers = true;
      this.wts = wts1;
    }
  } else {
    // the PTB tokenizer turns newlines into a special token; split on that
    WordToSentenceProcessor<CoreLabel> wts1 =
        new WordToSentenceProcessor<>(ArrayUtils.asImmutableSet(new String[]{PTBTokenizer.getNewlineToken()}));
    this.countLineNumbers = true;
    this.wts = wts1;
  }
} else if (Boolean.parseBoolean(isOneSentence)) { // this method treats null as false
  WordToSentenceProcessor<CoreLabel> wts1 = new WordToSentenceProcessor<>(true);
  this.countLineNumbers = false;
  this.wts = wts1;
} else {
  this.wts = new WordToSentenceProcessor<>(boundaryTokenRegex, boundaryFollowersRegex,
      boundariesToDiscard, htmlElementsToDiscard,
      WordToSentenceProcessor.stringToNewlineIsSentenceBreak(nlsb),
      (boundaryMultiTokenRegex != null) ? TokenSequencePattern.compile(boundaryMultiTokenRegex) : null,
      tokenRegexesToDiscard);
}
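The NewlineIsSentenceBreak states from the class description map onto the documented ssplit.newlineIsSentenceBreak property (always, never, two). A sketch of the "two" (blank-line paragraph) mode, with illustrative sample text:
import java.util.List;
import java.util.Properties;
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.util.CoreMap;

public class ParagraphSplitSketch {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.setProperty("annotators", "tokenize,ssplit");
    // "two": only a blank line (two consecutive newlines) forces a sentence break;
    // setting this also makes the tokenizer keep newline tokens
    props.setProperty("ssplit.newlineIsSentenceBreak", "two");
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
    Annotation doc = new Annotation("A heading\n\nBody text continues\non the next line.");
    pipeline.annotate(doc);
    List<CoreMap> sentences = doc.get(CoreAnnotations.SentencesAnnotation.class);
    for (CoreMap s : sentences) {
      System.out.println("[" + s.get(CoreAnnotations.TextAnnotation.class) + "]");
    }
  }
}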
Code example from: stanfordnlp/CoreNLP
/** Return a WordsToSentencesAnnotator that splits on newlines (only), which are then deleted.
* This constructor counts the lines by putting in empty token lists for empty lines.
* It tells the underlying splitter to return empty lists of tokens
* and then treats those empty lists as empty lines. We don't
* actually include empty sentences in the annotation, though. But they
* are used in numbering the sentence. Only this constructor leads to
* empty sentences.
*
* @param nlToken Zero or more new line tokens, which might be a {@literal \n} or the fake
* newline tokens returned from the tokenizer.
* @return A WordsToSentenceAnnotator.
*/
public static WordsToSentencesAnnotator newlineSplitter(String... nlToken) {
// this constructor will keep empty lines as empty sentences
WordToSentenceProcessor<CoreLabel> wts =
new WordToSentenceProcessor<>(ArrayUtils.asImmutableSet(nlToken));
return new WordsToSentencesAnnotator(false, true, wts);
}
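In pipeline terms, this newline-only splitting corresponds to the ssplit.eolonly property, which the CoreNLP documentation suggests pairing with whitespace tokenization; a hedged sketch:
import java.util.Properties;
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

public class EolOnlySketch {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.setProperty("annotators", "tokenize,ssplit");
    props.setProperty("tokenize.whitespace", "true");  // tokens are whitespace-separated words
    props.setProperty("ssplit.eolonly", "true");       // newlines are the only sentence breaks
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
    Annotation doc = new Annotation("one sentence per line\nanother line");
    pipeline.annotate(doc);
    System.out.println(doc.get(CoreAnnotations.SentencesAnnotation.class).size());  // expect 2
  }
}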
Code example from: stanfordnlp/CoreNLP
private boolean matchesTokenPatternsToDiscard(String word) {
return matches(tokenPatternsToDiscard, word);
}
Code example from: stanfordnlp/CoreNLP
public static void addEnhancedSentences(Annotation doc) {
//for every sentence that begins a paragraph: append this sentence and the previous one and see if sentence splitter would make a single sentence out of it. If so, add as extra sentence.
//for each sieve that potentially uses augmentedSentences in original:
List<CoreMap> sentences = doc.get(CoreAnnotations.SentencesAnnotation.class);
WordToSentenceProcessor wsp =
new WordToSentenceProcessor(WordToSentenceProcessor.NewlineIsSentenceBreak.NEVER); //create SentenceSplitter that never splits on newline
int prevParagraph = 0;
for(int i = 1; i < sentences.size(); i++) {
CoreMap sentence = sentences.get(i);
CoreMap prevSentence = sentences.get(i-1);
List<CoreLabel> tokensConcat = new ArrayList<>();
tokensConcat.addAll(prevSentence.get(CoreAnnotations.TokensAnnotation.class));
tokensConcat.addAll(sentence.get(CoreAnnotations.TokensAnnotation.class));
List<List<CoreLabel>> sentenceTokens = wsp.process(tokensConcat);
if(sentenceTokens.size() == 1) { //wsp would have put them into a single sentence --> add enhanced sentence.
sentence.set(EnhancedSentenceAnnotation.class, constructSentence(sentenceTokens.get(0), prevSentence, sentence));
}
}
}
Code example from: stanfordnlp/CoreNLP
// Excerpt: count line numbers while iterating over the split sentences
// (remainder truncated in the original listing).
for (List<CoreLabel> sentenceTokens : wts.process(tokens)) {
  if (countLineNumbers) {
    ++lineNumber;
Code example from: com.guokr/stan-cn-com
public WordsToSentencesAnnotator(boolean verbose, String boundaryTokenRegex,
Set<String> boundaryToDiscard, Set<String> htmlElementsToDiscard,
String newlineIsSentenceBreak) {
this(verbose, false,
new WordToSentenceProcessor<CoreLabel>(boundaryTokenRegex,
boundaryToDiscard, htmlElementsToDiscard,
WordToSentenceProcessor.stringToNewlineIsSentenceBreak(newlineIsSentenceBreak)));
}
Code example from: com.guokr/stan-cn-com
public WordsToSentencesAnnotator(boolean verbose) {
this(verbose, false, new WordToSentenceProcessor<CoreLabel>());
}
Code example from: edu.stanford.nlp/corenlp
public List<List<IN>> process(List<? extends IN> words) {
if (isOneSentence) {
List<List<IN>> sentences = Generics.newArrayList();
sentences.add(new ArrayList<IN>(words));
return sentences;
} else {
return wordsToSentences(words);
}
}