
Java NullPointerException: Tokenizing Input for Lexer


When I run the .jar file from the CMD prompt, I get the following error:

C:\Users\Mikael\My Documents\NetBeansProjects\cs413CompilerProject\dist>
java -jar "cs413CompilerProject.jar" "C:\Users\Mikael\Documents\NetBeansProjects\cs413
CompilerProject\cs413Compiler\simple.x"
User's current working directory:
C:\Users\Mikael\My Documents\NetBeansProjects\cs413CompilerProject\dist
java.io.FileNotFoundException: lexer\setup\tokens
(The system cannot find the path specified)
Exception in thread "main" java.lang.NullPointerException
at lexer.setup.TokenSetup.initTokenClasses(TokenSetup.java:77)
at lexer.setup.TokenSetup.main(TokenSetup.java:35)

The code it refers to is (starting at line 24 of TokenSetup.java and ending at line 35):

24  public class TokenSetup {
25      private String type, value;         // token type/value for new token
26      private int tokenCount = 0;
27      private BufferedReader in;
28      private PrintWriter table, symbols; // files used for new classes
29
30      /**
31       *
32       * @param args
33       */
34      public static void main(String args[]) {
35          new TokenSetup().initTokenClasses();}

And then the other reference, inside TokenSetup.initTokenClasses:

77  public void initTokenClasses() {
78      table.println("/*********************");
79      table.println("*");
80      table.println("* @Author Mikael M");
81      //... print a bunch of things
    }

Full code:

package lexer.setup;

import java.util.*;
import java.io.*;
//
///**
// * TokenSetup class is used to read the tokens from file <i>tokens</i>
// * and automatically build the 2 classes/files <i>TokenType.java</i>
// * and <i>Sym.java</i><br>
// * Therefore, if there is any change to the tokens then we only need to
// * modify the file <i>tokens</i> and run this program again before using the
// * compiler
//*/
public class TokenSetup {
    private String type, value;          // token type/value for new token
    private int tokenCount = 0;
    private BufferedReader in;
    private PrintWriter table, symbols;  // files used for new classes

    /**
     *
     * @param args
     */
    public static void main(String args[]) {
        new TokenSetup().initTokenClasses();
    }

    TokenSetup() {
        try {
            System.out.println("User's current working directory: " + System.getProperty("user.dir"));
            String sep = System.getProperty("file.separator");
            in = new BufferedReader(new FileReader("lexer" + sep + "setup" + sep + "tokens"));
            table = new PrintWriter(new FileOutputStream("lexer" + sep + "TokenType.java"));
            symbols = new PrintWriter(new FileOutputStream("lexer" + sep + "Tokens.java"));
        } catch (Exception e) {
            System.out.println(e);
        }
    }

    ///**
    // * read next line which contains token information;<br>
    // * each line will contain the token type used in lexical analysis and
    // * the printstring of the token: e.g.<br><ul>
    // * <li>Program program</li>
    // * <li>Int int</li>
    // * <li>BOOLean boolean</li></ul>
    // * @throws IOException
    // */
    public void getNextToken() throws IOException {
        try {
            StringTokenizer st = new StringTokenizer(in.readLine());
            type = st.nextToken();
            value = st.nextToken();
        } catch (NoSuchElementException e) {
            System.out.println("***tokens file does not have 2 strings per line***");
            System.exit(1);
        } catch (NullPointerException ne) {
            throw new IOException("***End of File***");
        }
        tokenCount++;
    }

    ///**
    // * initTokenClasses will create the 2 files
    //*/
    public void initTokenClasses() {
        table.println("/*********************");
        table.println("*");
        table.println("* @Author Mikael C. Miller");
        table.println("*");
        table.println("* SFSU 9/20/15");
        table.println("*");
        table.println("* CSc 413");
        table.println("*");
        table.println("*/");
        table.println("package lexer;");
        table.println(" ");
        table.println("/**");
        table.println(" * This file is automatically generated<br>");
        table.println(" * it contains the table of mappings from token");
        table.println(" * constants to their Symbols");
        table.println("*/");
        table.println("public class TokenType {");
        table.println(" public static java.util.HashMap<Tokens,Symbol> tokens = new java.util.HashMap<Tokens,Symbol>();");
        table.println(" public TokenType() {");
        symbols.println("package lexer;");
        symbols.println(" ");
        symbols.println("/**");
        symbols.println(" * This file is automatically generated<br>");
        symbols.println(" * - it contains the enumberation of all of the tokens");
        symbols.println("*/");
        symbols.println("public enum Tokens {");
        symbols.print(" BogusToken");

        while (true) {
            try {
                getNextToken();
            } catch (IOException e) { break; }

            String symType = "Tokens." + type;

            table.println(" tokens.put(" + symType +
                ", Symbol.symbol(\"" + value + "\"," + symType + "));");

            if (tokenCount % 5 == 0) {
                symbols.print(",\n " + type);
            } else {
                symbols.print("," + type);
            }
        }

        table.println(" }");
        table.println("}");
        table.close();
        symbols.println("\n}");
        symbols.close();
        try {
            in.close();
        } catch (Exception e) {}
    }
}

Best Answer

In the constructor of TokenSetup, a FileNotFoundException is thrown, but you do nothing with it other than print the exception message to System.out. Your constructor then returns as if everything were fine, and your main() function goes on to call initTokenClasses() on a partially initialized TokenSetup instance. I do not even want to think about what happens as a result, and I am not going to look into it; it is irrelevant. The problem is the first exception thrown, the FileNotFoundException. The NullPointerException that follows is a red herring (Wikipedia).
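The mechanism is easy to see in a stripped-down sketch (hypothetical class and file names, not the original code): the constructor swallows the exception, the field it was supposed to initialize stays null, and the very next use of that field throws a NullPointerException far away from the real cause.

import java.io.*;

// Minimal illustration of a swallowed constructor exception (hypothetical).
public class SwallowedExceptionDemo {
    private PrintWriter out;

    SwallowedExceptionDemo() {
        try {
            // Throws FileNotFoundException if the directory does not exist.
            out = new PrintWriter(new FileOutputStream("no-such-dir/output.txt"));
        } catch (Exception e) {
            System.out.println(e);   // swallowed: 'out' is still null afterwards
        }
    }

    public void write() {
        out.println("hello");        // NullPointerException surfaces here, not at the real error
    }

    public static void main(String[] args) {
        new SwallowedExceptionDemo().write();
    }
}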

When you hit an exception, you cannot simply carry on as if nothing happened. Exceptional conditions must not be papered over. Do this instead:

    public static void main(String args[]) throws Exception

    TokenSetup() throws Exception

And if you do not know what to do with an exception, stop catching it.

That way, when an exception is thrown your program stops right there, instead of stumbling on and inevitably throwing more exceptions further down, which only confuses you.
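Applied to the code above, the change looks roughly like this (a sketch of the answer's suggestion only; the remaining fields and the body of initTokenClasses are omitted): the constructor and main() declare the exception instead of catching it, so the program stops at the first failure.

import java.io.*;

public class TokenSetup {
    private BufferedReader in;
    private PrintWriter table, symbols;
    // ... remaining fields and methods unchanged ...

    public static void main(String args[]) throws Exception {
        new TokenSetup().initTokenClasses();
    }

    TokenSetup() throws Exception {
        System.out.println("User's current working directory: " + System.getProperty("user.dir"));
        String sep = System.getProperty("file.separator");
        // No try/catch here: if the tokens file is missing, the
        // FileNotFoundException propagates and the program stops immediately.
        in = new BufferedReader(new FileReader("lexer" + sep + "setup" + sep + "tokens"));
        table = new PrintWriter(new FileOutputStream("lexer" + sep + "TokenType.java"));
        symbols = new PrintWriter(new FileOutputStream("lexer" + sep + "Tokens.java"));
    }

    public void initTokenClasses() { /* unchanged */ }
}

Run this way, the jar fails right away with the FileNotFoundException for lexer\setup\tokens, which points at the actual problem: the relative path is resolved against the current working directory shown in the output, not against the jar's location.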

Regarding Java NullPointerException: Tokenizing Input for Lexer, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/32905498/
