
java - Stanford CoreNLP Java output


I am new to Java and the Stanford NLP toolkit and am trying to use them for a project. Specifically, I am trying to use the Stanford CoreNLP toolkit to annotate text (from NetBeans rather than the command line), and I am using the code provided at http://nlp.stanford.edu/software/corenlp.shtml#Usage (using the Stanford CoreNLP API). The question is: can anyone tell me how to get the output into a file so that I can process it further?

I have tried printing the graphs and the sentences to the console, just to see the contents, and that works. Basically, what I need is to return the annotated document so that I can call it from my main class and output a text file (if that is possible). I have been looking at the Stanford CoreNLP API, but given my lack of experience I really do not know what the best way to return this kind of information is.

The code is as follows:

Properties props = new Properties();
props.put("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref");
StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

// read some text in the text variable
String text = "the quick fox jumps over the lazy dog";

// create an empty Annotation just with the given text
Annotation document = new Annotation(text);

// run all Annotators on this text
pipeline.annotate(document);

// these are all the sentences in this document
// a CoreMap is essentially a Map that uses class objects as keys and has values with custom types
List<CoreMap> sentences = document.get(SentencesAnnotation.class);

for (CoreMap sentence : sentences) {
    // traversing the words in the current sentence
    // a CoreLabel is a CoreMap with additional token-specific methods
    for (CoreLabel token : sentence.get(TokensAnnotation.class)) {
        // this is the text of the token
        String word = token.get(TextAnnotation.class);
        // this is the POS tag of the token
        String pos = token.get(PartOfSpeechAnnotation.class);
        // this is the NER label of the token
        String ne = token.get(NamedEntityTagAnnotation.class);
    }

    // this is the parse tree of the current sentence
    Tree tree = sentence.get(TreeAnnotation.class);

    // this is the Stanford dependency graph of the current sentence
    SemanticGraph dependencies = sentence.get(CollapsedCCProcessedDependenciesAnnotation.class);
}

// This is the coreference link graph
// Each chain stores a set of mentions that link to each other,
// along with a method for getting the most representative mention
// Both sentence and token offsets start at 1!
Map<Integer, CorefChain> graph =
        document.get(CorefChainAnnotation.class);
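For example, something along the lines of the following sketch is the kind of file output I have in mind (just a sketch: "output.txt" is only a placeholder name, it needs java.io.FileWriter and java.io.PrintWriter, and the surrounding method has to declare or handle IOException):

// sketch only: write each token as word<TAB>POS<TAB>NER to a placeholder file
PrintWriter writer = new PrintWriter(new FileWriter("output.txt"));
for (CoreMap sentence : sentences) {
    for (CoreLabel token : sentence.get(TokensAnnotation.class)) {
        writer.println(token.get(TextAnnotation.class) + "\t"
                + token.get(PartOfSpeechAnnotation.class) + "\t"
                + token.get(NamedEntityTagAnnotation.class));
    }
    writer.println(); // blank line between sentences
}
writer.close();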

Best answer

Once you have any or all of the natural language analyses shown in your code sample, all you need to do is send them to a file in the normal Java way, for example by using a FileWriter for text output. Concretely, here is a simple complete example that shows output being sent to a file (if you give it suitable command-line arguments):

import java.io.*;
import java.util.*;

import edu.stanford.nlp.io.*;
import edu.stanford.nlp.ling.*;
import edu.stanford.nlp.pipeline.*;
import edu.stanford.nlp.trees.*;
import edu.stanford.nlp.util.*;

public class StanfordCoreNlpDemo {

    public static void main(String[] args) throws IOException {
        PrintWriter out;
        if (args.length > 1) {
            out = new PrintWriter(args[1]);
        } else {
            out = new PrintWriter(System.out);
        }
        PrintWriter xmlOut = null;
        if (args.length > 2) {
            xmlOut = new PrintWriter(args[2]);
        }

        StanfordCoreNLP pipeline = new StanfordCoreNLP();
        Annotation annotation;
        if (args.length > 0) {
            annotation = new Annotation(IOUtils.slurpFileNoExceptions(args[0]));
        } else {
            annotation = new Annotation("Kosgi Santosh sent an email to Stanford University. He didn't get a reply.");
        }

        pipeline.annotate(annotation);
        pipeline.prettyPrint(annotation, out);
        if (xmlOut != null) {
            pipeline.xmlPrint(annotation, xmlOut);
        }

        // An Annotation is a Map and you can get and use the various analyses individually.
        // For instance, this gets the parse tree of the first sentence in the text.
        List<CoreMap> sentences = annotation.get(CoreAnnotations.SentencesAnnotation.class);
        if (sentences != null && sentences.size() > 0) {
            CoreMap sentence = sentences.get(0);
            Tree tree = sentence.get(TreeCoreAnnotations.TreeAnnotation.class);
            out.println();
            out.println("The first sentence parsed is:");
            tree.pennPrint(out);
        }

        // close the writers so that file output is actually flushed to disk
        out.close();
        if (xmlOut != null) {
            xmlOut.close();
        }
    }

}
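With the CoreNLP jars and models on the classpath, running this class as, e.g., StanfordCoreNlpDemo input.txt output.txt output.xml reads the text from input.txt, writes the pretty-printed analyses to output.txt, and writes the XML to output.xml.

If you also want the coreference chains in the same file, one possible sketch (assuming the dcoref annotator ran, as it does in your pipeline and in the default one used above) is to add something like this before the writers are closed at the end of main(); it additionally needs imports of edu.stanford.nlp.dcoref.CorefChain and edu.stanford.nlp.dcoref.CorefCoreAnnotations:

// sketch: print each coreference chain's representative mention and all of its mentions
Map<Integer, CorefChain> corefChains =
        annotation.get(CorefCoreAnnotations.CorefChainAnnotation.class);
if (corefChains != null) {
    out.println();
    out.println("Coreference chains:");
    for (CorefChain chain : corefChains.values()) {
        out.println(chain.getRepresentativeMention().mentionSpan
                + " <- " + chain.getMentionsInTextualOrder());
    }
}

The same pattern works for any of the other analyses: pull the annotation you want off the Annotation or off a sentence CoreMap and print it to the PrintWriter.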

Regarding "java - Stanford CoreNLP Java output", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/11832490/
