Client input is handled in the Server class's handleConnection method.
It first calls allowConnection to check whether the client IP is permitted. The check is implemented in the ServerAcl class, and an ACL file of allowed/denied IP addresses can be supplied among the server's startup parameters. The check itself is straightforward, so it is not covered here; see ServerAcl if you are interested.
protected boolean allowConnection(Socket socket) {

    if (isShuttingDown) {
        return false;
    }

    return (acl == null) ? true
                         : acl.permitAccess(
                             socket.getInetAddress().getAddress());
}
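As a quick illustration of where that ACL hook sits, the sketch below starts a server programmatically with an ACL file configured. It is a minimal sketch only: the package names assume HSQLDB 2.x, and the "server.acl" property key, the file name and the mem:test database are assumptions made for this example rather than something taken from the text above.

import org.hsqldb.persist.HsqlProperties;
import org.hsqldb.server.Server;

public class AclServerSketch {
    public static void main(String[] args) throws Exception {
        HsqlProperties props = new HsqlProperties();
        props.setProperty("server.database.0", "mem:test");  // in-process, in-memory database
        props.setProperty("server.dbname.0", "test");        // public name used in the JDBC URL
        props.setProperty("server.acl", "server.acl.txt");   // assumed key: path to the allow/deny rule file

        Server server = new Server();
        server.setProperties(props);  // the ACL file ends up in a ServerAcl that allowConnection consults
        server.start();
    }
}

With such a file in place, a socket from a denied address is dropped right after accept, before any protocol handshake takes place.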
Next, the serverProtocol parameter decides whether the connection is handled in HSQL mode or HTTP mode; the two differ only in the listening port and the wire format.
if (serverProtocol == ServerConstants.SC_PROTOCOL_HSQL) {
    r   = new ServerConnection(s, this);
    ctn = ((ServerConnection) r).getConnectionThreadName();
} else {
    r   = new WebServerConnection(s, (WebServer) this);
    ctn = ((WebServerConnection) r).getConnectionThreadName();
}

t = new Thread(serverConnectionThreadGroup, r, ctn);

t.start();
A new thread is started to handle the client connection. Looking at ServerConnection first: each ServerConnection is assigned a monotonically increasing id and registers itself with the server's connection set.
mThread = mCurrentThread.getAndIncrement();

synchronized (server.serverConnSet) {
    server.serverConnSet.add(this);
}
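The pattern here is the classic thread-per-connection skeleton: an AtomicInteger hands out connection ids and a synchronized set tracks live connections. A stripped-down, self-contained version of the same idea (all names below are illustrative, not HSQLDB's):

import java.net.ServerSocket;
import java.net.Socket;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative thread-per-connection skeleton mirroring the structure of
// Server.handleConnection and ServerConnection, with hypothetical names.
public class MiniServer {
    private static final AtomicInteger nextId = new AtomicInteger();
    private static final Set<Runnable> liveConnections = new HashSet<>();

    public static void main(String[] args) throws Exception {
        try (ServerSocket listener = new ServerSocket(9001)) {
            while (true) {
                Socket s  = listener.accept();
                int    id = nextId.getAndIncrement();      // per-connection id, like mThread
                Runnable handler = () -> handle(id, s);

                synchronized (liveConnections) {           // register, like server.serverConnSet
                    liveConnections.add(handler);
                }

                new Thread(handler, "Connection-" + id).start();
            }
        }
    }

    private static void handle(int id, Socket s) {
        // read requests in a loop, then close the socket
    }
}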
In the run method, init handles the connection's first request, the while loop then services each subsequent request on the same connection, and close finally shuts the connection down.
public void run() {

    int msgType;

    init();

    if (session != null) {
        try {
            while (keepAlive) {
                msgType = dataInput.readByte();

                if (msgType < ResultConstants.MODE_UPPER_LIMIT) {
                    receiveResult(msgType);
                } else {
                    receiveOdbcPacket((char) msgType);
                }
            }
        ......

    close();
}

In the init method, the handshake call works out which kind of connection the current request is making:
private void init() { runnerThread = Thread.currentThread(); keepAlive = true; try { socket.setTcpNoDelay(true); dataInput = new DataInputStream( new BufferedInputStream(socket.getInputStream())); dataOutput = new DataOutputStream(socket.getOutputStream()); int firstInt = handshake(); switch (streamProtocol) { case HSQL_STREAM_PROTOCOL : if (firstInt != ClientConnection .NETWORK_COMPATIBILITY_VERSION_INT) { if (firstInt == -1900000) { firstInt = -2000000; } String verString = ClientConnection.toNetCompVersionString(firstInt); throw Error.error( null, ErrorCode.SERVER_VERSIONS_INCOMPATIBLE, 0, new String[] { verString, HsqlDatabaseProperties.hsqldb_version }); } Result resultIn = Result.newResult(dataInput, rowIn); resultIn.readAdditionalResults(session, dataInput, rowIn); Result resultOut; resultOut = setDatabase(resultIn); resultOut.write(session, dataOutput, rowOut); break; case ODBC_STREAM_PROTOCOL : odbcConnect(firstInt); break; default : // Protocol detection failures should already have been // handled. keepAlive = false; } } catch (Exception e) {

The high byte of the first four bytes received identifies the connection type; for an HSQL client the same integer also carries the network-compatibility version.
int firstInt = dataInput.readInt();

switch (firstInt >> 24) {

    case 80 :    // Empirically
        server.print(
            "Rejected attempt from client using hsql HTTP protocol");

        return 0;

    case 0 :

        // For ODBC protocol, this is the first byte of a 4-byte int
        // size. The size can never be large enough that the first
        // byte will be non-zero.
        streamProtocol = ODBC_STREAM_PROTOCOL;
        break;

    default :
        streamProtocol = HSQL_STREAM_PROTOCOL;    // HSQL protocol client
}

The most important parts, though, are the middle section: parsing the incoming request, executing it, and packaging the result. Let's look at those in detail.
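Before moving on, a note on the magic numbers in that switch: an HTTP client opens its request with an ASCII method name, so the first byte the listener sees is 'P' (decimal 80) for a POST; an ODBC startup packet begins with a 4-byte length whose high byte is 0 for any realistic size; anything else is treated as an HSQL client whose first int is the (negative) network-compatibility version. The tiny sketch below replays the same test on hand-built values; the class and sample values are purely illustrative.

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class HandshakeSketch {

    // Mirrors the switch in handshake(): classify a connection by the high
    // byte of the first 4 bytes received. Names here are illustrative.
    static String classify(int firstInt) {
        switch (firstInt >> 24) {
            case 80 : return "HTTP client (first byte 'P', e.g. POST)";
            case 0  : return "ODBC client (first byte of a small 4-byte length)";
            default : return "HSQL protocol client";
        }
    }

    public static void main(String[] args) {
        int http = ByteBuffer.wrap("POST".getBytes(StandardCharsets.US_ASCII)).getInt();
        int odbc = 296;          // a small ODBC startup-packet length: high byte is 0
        int hsql = -2000000;     // a negative HSQL network-compatibility version int

        System.out.println(classify(http));   // HTTP client ...
        System.out.println(classify(odbc));   // ODBC client ...
        System.out.println(classify(hsql));   // HSQL protocol client
    }
}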
On the first request session is still null; it is assigned later. Now look at the core of Result resultIn = Result.newResult(dataInput, rowIn): a first byte equal to LARGE_OBJECT_OP means the request operates on a large object (LOB); the per-type fields are read afterwards.
public static Result newResult(DataInput dataInput,
                               RowInputBinary in)
                               throws IOException, HsqlException {
    return newResult(null, dataInput.readByte(), dataInput, in);
}

public static Result newResult(Session session, int mode,
                               DataInput dataInput,
                               RowInputBinary in)
                               throws IOException, HsqlException {

    try {
        if (mode == ResultConstants.LARGE_OBJECT_OP) {
            return ResultLob.newLob(dataInput, false);
        }

        Result result = newResult(session, dataInput, in, mode);

        return result;
    } catch (IOException e) {
        throw Error.error(ErrorCode.X_08000);
    }
}

The private newResult then reads the fields each request type needs: the first 4 bytes give the request length, the whole body of that length is copied into in, and the wrapped RowInputBinary is used to read strings, integers and the other fields.
private static Result newResult(Session session, DataInput dataInput, RowInputBinary in, int mode) throws IOException, HsqlException { Result result = newResult(mode); int length = dataInput.readInt(); in.resetRow(0, length); byte[] byteArray = in.getBuffer(); final int offset = 4; dataInput.readFully(byteArray, offset, length - offset); switch (mode) { case ResultConstants.GETSESSIONATTR : result.statementReturnType = in.readByte(); break; case ResultConstants.DISCONNECT : case ResultConstants.RESETSESSION : case ResultConstants.STARTTRAN : break; case ResultConstants.PREPARE : result.setStatementType(in.readByte()); result.mainString = in.readString(); result.rsProperties = in.readByte(); result.generateKeys = in.readByte(); if (result.generateKeys == ResultConstants .RETURN_GENERATED_KEYS_COL_NAMES || result .generateKeys == ResultConstants .RETURN_GENERATED_KEYS_COL_INDEXES) { result.generatedMetaData = new ResultMetaData(in); } break; case ResultConstants.CLOSE_RESULT : result.id = in.readLong(); break; case ResultConstants.FREESTMT : result.statementID = in.readLong(); break; case ResultConstants.EXECDIRECT : result.updateCount = in.readInt(); result.fetchSize = in.readInt(); result.statementReturnType = in.readByte(); result.mainString = in.readString(); result.rsProperties = in.readByte(); result.queryTimeout = in.readShort(); result.generateKeys = in.readByte(); if (result.generateKeys == ResultConstants .RETURN_GENERATED_KEYS_COL_NAMES || result .generateKeys == ResultConstants .RETURN_GENERATED_KEYS_COL_INDEXES) { result.generatedMetaData = new ResultMetaData(in); } break; case ResultConstants.CONNECT : result.databaseName = in.readString(); result.mainString = in.readString(); result.subString = in.readString(); result.zoneString = in.readString(); result.updateCount = in.readInt(); break; case ResultConstants.ERROR : case ResultConstants.WARNING : result.mainString = in.readString(); result.subString = in.readString(); result.errorCode = in.readInt(); break; case ResultConstants.CONNECTACKNOWLEDGE : result.databaseID = in.readInt(); result.sessionID = in.readLong(); result.databaseName = in.readString(); result.mainString = in.readString(); break; case ResultConstants.UPDATECOUNT : result.updateCount = in.readInt(); break; case ResultConstants.ENDTRAN : { int type = in.readInt(); result.setActionType(type); // endtran type switch (type) { case ResultConstants.TX_SAVEPOINT_NAME_RELEASE : case ResultConstants.TX_SAVEPOINT_NAME_ROLLBACK : result.mainString = in.readString(); // savepoint name break; case ResultConstants.TX_COMMIT : case ResultConstants.TX_ROLLBACK : case ResultConstants.TX_COMMIT_AND_CHAIN : case ResultConstants.TX_ROLLBACK_AND_CHAIN : break; default : throw Error.runtimeError(ErrorCode.U_S0500, "Result"); } break; } case ResultConstants.SETCONNECTATTR : { int type = in.readInt(); // attr type result.setConnectionAttrType(type); switch (type) { case ResultConstants.SQL_ATTR_SAVEPOINT_NAME : result.mainString = in.readString(); // savepoint name break; // case ResultConstants.SQL_ATTR_AUTO_IPD : // - always true // default: throw - case never happens default : throw Error.runtimeError(ErrorCode.U_S0500, "Result"); } break; } case ResultConstants.PREPARE_ACK : result.statementReturnType = in.readByte(); result.statementID = in.readLong(); result.rsProperties = in.readByte(); result.metaData = new ResultMetaData(in); result.parameterMetaData = new ResultMetaData(in); break; case ResultConstants.CALL_RESPONSE : result.updateCount = in.readInt(); result.fetchSize = 
in.readInt(); result.statementID = in.readLong(); result.statementReturnType = in.readByte(); result.rsProperties = in.readByte(); result.metaData = new ResultMetaData(in); result.valueData = readSimple(in, result.metaData); break; case ResultConstants.EXECUTE : result.updateCount = in.readInt(); result.fetchSize = in.readInt(); result.statementID = in.readLong(); result.rsProperties = in.readByte(); result.queryTimeout = in.readShort(); Statement statement = session.statementManager.getStatement(session, result.statementID); if (statement == null) { // invalid statement result.mode = ResultConstants.EXECUTE_INVALID; result.valueData = ValuePool.emptyObjectArray; break; } result.statement = statement; result.metaData = result.statement.getParametersMetaData(); result.valueData = readSimple(in, result.metaData); break; case ResultConstants.UPDATE_RESULT : { result.id = in.readLong(); int type = in.readInt(); result.setActionType(type); result.metaData = new ResultMetaData(in); result.valueData = readSimple(in, result.metaData); break; } case ResultConstants.BATCHEXECRESPONSE : case ResultConstants.BATCHEXECUTE : case ResultConstants.BATCHEXECDIRECT : case ResultConstants.SETSESSIONATTR : { result.updateCount = in.readInt(); result.fetchSize = in.readInt(); result.statementID = in.readLong(); result.queryTimeout = in.readShort(); result.metaData = new ResultMetaData(in); result.navigator.readSimple(in, result.metaData); break; } case ResultConstants.PARAM_METADATA : { result.metaData = new ResultMetaData(in); result.navigator.read(in, result.metaData); break; } case ResultConstants.REQUESTDATA : { result.id = in.readLong(); result.updateCount = in.readInt(); result.fetchSize = in.readInt(); break; } case ResultConstants.DATAHEAD : case ResultConstants.DATA : case ResultConstants.GENERATED : { result.id = in.readLong(); result.updateCount = in.readInt(); result.fetchSize = in.readInt(); result.rsProperties = in.readByte(); result.metaData = new ResultMetaData(in); result.navigator = new RowSetNavigatorClient(); result.navigator.read(in, result.metaData); break; } case ResultConstants.DATAROWS : { result.metaData = new ResultMetaData(in); result.navigator = new RowSetNavigatorClient(); result.navigator.read(in, result.metaData); break; } default : throw Error.runtimeError(ErrorCode.U_S0500, "Result"); } return result; }

readAdditionalResults then reads any chained results, i.e. the handling for prefetched data and for fetching the next batch of rows.
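Stripped of the per-mode fields, the framing newResult relies on is simple: a mode byte (already consumed by the caller), then a 4-byte length that counts itself, then length - 4 bytes of body. The self-contained sketch below reads one such frame; it simplifies things by copying the body into its own array rather than reusing a RowInputBinary buffer, and all names are illustrative.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class FrameSketch {

    /** Reads one length-prefixed body the way newResult does: a 4-byte length
     *  (which includes its own 4 bytes), then length - 4 bytes of payload. */
    static byte[] readFrameBody(DataInputStream in) throws IOException {
        int    length = in.readInt();
        byte[] body   = new byte[length - 4];

        in.readFully(body);

        return body;
    }

    public static void main(String[] args) throws IOException {
        // Build a fake frame: length field counts itself plus the payload.
        byte[] payload = "EXECDIRECT fields would go here".getBytes("ISO-8859-1");
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream      out = new DataOutputStream(buf);

        out.writeInt(4 + payload.length);
        out.write(payload);

        DataInputStream in =
            new DataInputStream(new ByteArrayInputStream(buf.toByteArray()));

        System.out.println(new String(readFrameBody(in), "ISO-8859-1"));
    }
}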
The first request is a CONNECT, so setDatabase is called to establish the session information for the connection (target database, user and so on).
private Result setDatabase(Result resultIn) { try { String databaseName = resultIn.getDatabaseName(); dbIndex = server.getDBIndex(databaseName); dbID = server.dbID[dbIndex]; user = resultIn.getMainString(); if (!server.isSilent()) { server.printWithThread(mThread + ":Trying to connect user '" + user + "' to DB (" + databaseName + ')'); } session = DatabaseManager.newSession(dbID, user, resultIn.getSubString(), resultIn.getZoneString(), resultIn.getUpdateCount()); if (!server.isSilent()) { server.printWithThread(mThread + ":Connected user '" + user + "'"); } return Result.newConnectionAcknowledgeResponse( session.getDatabase(), session.getId(), session.getDatabase().getDatabaseID()); } catch (HsqlException e) { session = null; return Result.newErrorResult(e); } catch (RuntimeException e) { session = null; return Result.newErrorResult(e); } }

DatabaseManager.newSession:
public static Session newSession(int dbID, String user, String password,
                                 String zoneString, int timeZoneSeconds) {

    Database db = null;

    synchronized (databaseIDMap) {
        db = (Database) databaseIDMap.get(dbID);
    }

    if (db == null) {
        return null;
    }

    Session session = db.connect(user, password, zoneString,
                                 timeZoneSeconds);

    session.isNetwork = true;

    return session;
}

This ultimately calls the Session constructor; transactions are auto-commit by default, and readonly reflects whether the underlying database is configured as read-only.
Session(Database db, User user, boolean autocommit, boolean readonly, long id, String zoneString, int timeZoneSeconds) { sessionId = id; database = db; this.user = user; this.sessionUser = user; this.zoneString = zoneString; this.sessionTimeZoneSeconds = timeZoneSeconds; this.timeZoneSeconds = timeZoneSeconds; rowActionList = new HsqlArrayList(32, true); waitedSessions = new OrderedHashSet(); waitingSessions = new OrderedHashSet(); tempSet = new OrderedHashSet(); isolationLevelDefault = database.defaultIsolationLevel; isolationLevel = isolationLevelDefault; txConflictRollback = database.txConflictRollback; isReadOnlyDefault = readonly; isReadOnlyIsolation = isolationLevel == SessionInterface.TX_READ_UNCOMMITTED; sessionContext = new SessionContext(this); sessionContext.isAutoCommit = autocommit ? Boolean.TRUE : Boolean.FALSE; sessionContext.isReadOnly = isReadOnlyDefault ? Boolean.TRUE : Boolean.FALSE; parser = new ParserCommand(this, new Scanner()); setResultMemoryRowCount(database.getResultMaxMemoryRows()); resetSchema(); sessionData = new SessionData(database, this); statementManager = new StatementManager(database); }

The SessionManager, naturally, records the new session and the running session count:
Session s = new Session(db, user, autoCommit, readonly, sessionIdCount,
                        zoneString, timeZoneSeconds);

sessionMap.put(sessionIdCount, s);

sessionIdCount++;

Finally, the response Result is written back with write:
public void write(SessionInterface session, DataOutputStream dataOut, RowOutputInterface rowOut) throws IOException, HsqlException { rowOut.reset(); rowOut.writeByte(mode); int startPos = rowOut.size(); rowOut.writeSize(0); switch (mode) { case ResultConstants.GETSESSIONATTR : rowOut.writeByte(statementReturnType); break; case ResultConstants.DISCONNECT : case ResultConstants.RESETSESSION : case ResultConstants.STARTTRAN : break; case ResultConstants.PREPARE : rowOut.writeByte(statementReturnType); rowOut.writeString(mainString); rowOut.writeByte(rsProperties); rowOut.writeByte(generateKeys); if (generateKeys == ResultConstants .RETURN_GENERATED_KEYS_COL_NAMES || generateKeys == ResultConstants .RETURN_GENERATED_KEYS_COL_INDEXES) { generatedMetaData.write(rowOut); } break; case ResultConstants.FREESTMT : rowOut.writeLong(statementID); break; case ResultConstants.CLOSE_RESULT : rowOut.writeLong(id); break; case ResultConstants.EXECDIRECT : rowOut.writeInt(updateCount); rowOut.writeInt(fetchSize); rowOut.writeByte(statementReturnType); rowOut.writeString(mainString); rowOut.writeByte(rsProperties); rowOut.writeShort(queryTimeout); rowOut.writeByte(generateKeys); if (generateKeys == ResultConstants .RETURN_GENERATED_KEYS_COL_NAMES || generateKeys == ResultConstants .RETURN_GENERATED_KEYS_COL_INDEXES) { generatedMetaData.write(rowOut); } break; case ResultConstants.CONNECT : rowOut.writeString(databaseName); rowOut.writeString(mainString); rowOut.writeString(subString); rowOut.writeString(zoneString); rowOut.writeInt(updateCount); break; case ResultConstants.ERROR : case ResultConstants.WARNING : rowOut.writeString(mainString); rowOut.writeString(subString); rowOut.writeInt(errorCode); break; case ResultConstants.CONNECTACKNOWLEDGE : rowOut.writeInt(databaseID); rowOut.writeLong(sessionID); rowOut.writeString(databaseName); rowOut.writeString(mainString); break; case ResultConstants.UPDATECOUNT : rowOut.writeInt(updateCount); break; case ResultConstants.ENDTRAN : { int type = getActionType(); rowOut.writeInt(type); // endtran type switch (type) { case ResultConstants.TX_SAVEPOINT_NAME_RELEASE : case ResultConstants.TX_SAVEPOINT_NAME_ROLLBACK : rowOut.writeString(mainString); // savepoint name break; case ResultConstants.TX_COMMIT : case ResultConstants.TX_ROLLBACK : case ResultConstants.TX_COMMIT_AND_CHAIN : case ResultConstants.TX_ROLLBACK_AND_CHAIN : break; default : throw Error.runtimeError(ErrorCode.U_S0500, "Result"); } break; } case ResultConstants.PREPARE_ACK : rowOut.writeByte(statementReturnType); rowOut.writeLong(statementID); rowOut.writeByte(rsProperties); metaData.write(rowOut); parameterMetaData.write(rowOut); break; case ResultConstants.CALL_RESPONSE : rowOut.writeInt(updateCount); rowOut.writeInt(fetchSize); rowOut.writeLong(statementID); rowOut.writeByte(statementReturnType); rowOut.writeByte(rsProperties); metaData.write(rowOut); writeSimple(rowOut, metaData, (Object[]) valueData); break; case ResultConstants.EXECUTE : rowOut.writeInt(updateCount); rowOut.writeInt(fetchSize); rowOut.writeLong(statementID); rowOut.writeByte(rsProperties); rowOut.writeShort(queryTimeout); writeSimple(rowOut, metaData, (Object[]) valueData); break; case ResultConstants.UPDATE_RESULT : rowOut.writeLong(id); rowOut.writeInt(getActionType()); metaData.write(rowOut); writeSimple(rowOut, metaData, (Object[]) valueData); break; case ResultConstants.BATCHEXECRESPONSE : case ResultConstants.BATCHEXECUTE : case ResultConstants.BATCHEXECDIRECT : case ResultConstants.SETSESSIONATTR : { 
rowOut.writeInt(updateCount); rowOut.writeInt(fetchSize); rowOut.writeLong(statementID); rowOut.writeShort(queryTimeout); metaData.write(rowOut); navigator.writeSimple(rowOut, metaData); break; } case ResultConstants.PARAM_METADATA : { metaData.write(rowOut); navigator.write(rowOut, metaData); break; } case ResultConstants.SETCONNECTATTR : { int type = getConnectionAttrType(); rowOut.writeInt(type); // attr type / updateCount switch (type) { case ResultConstants.SQL_ATTR_SAVEPOINT_NAME : rowOut.writeString(mainString); // savepoint name break; // case ResultConstants.SQL_ATTR_AUTO_IPD // always true // default: // throw, but case never happens default : throw Error.runtimeError(ErrorCode.U_S0500, "Result"); } break; } case ResultConstants.REQUESTDATA : { rowOut.writeLong(id); rowOut.writeInt(updateCount); rowOut.writeInt(fetchSize); break; } case ResultConstants.DATAROWS : metaData.write(rowOut); navigator.write(rowOut, metaData); break; case ResultConstants.DATAHEAD : case ResultConstants.DATA : case ResultConstants.GENERATED : rowOut.writeLong(id); rowOut.writeInt(updateCount); rowOut.writeInt(fetchSize); rowOut.writeByte(rsProperties); metaData.write(rowOut); navigator.write(rowOut, metaData); break; default : throw Error.runtimeError(ErrorCode.U_S0500, "Result"); } rowOut.writeIntData(rowOut.size() - startPos, startPos); dataOut.write(rowOut.getOutputStream().getBuffer(), 0, rowOut.size()); int count = getLobCount(); Result current = this; for (int i = 0; i < count; i++) { ResultLob lob = current.lobResults; lob.writeBody(session, dataOut); current = current.lobResults; } if (chainedResult == null) { dataOut.writeByte(ResultConstants.NONE); } else { chainedResult.write(session, dataOut, rowOut); } dataOut.flush(); }
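The write path mirrors the read path: the mode byte goes out first, a size placeholder of 0 is written at startPos, the mode-specific fields follow, and the real size is patched back over the placeholder before the buffer is flushed to the socket. A minimal sketch of that back-patching trick on a plain byte buffer (class name and the sample mode value are illustrative):

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

public class FrameWriterSketch {

    /** Builds one frame: mode byte, 4-byte size placeholder, payload, then the
     *  size is back-patched over the placeholder - the same idea as
     *  rowOut.writeSize(0) followed later by rowOut.writeIntData(size, startPos). */
    static byte[] writeFrame(int mode, byte[] payload) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream      out = new DataOutputStream(buf);

        out.writeByte(mode);
        int startPos = buf.size();       // position of the size field
        out.writeInt(0);                 // placeholder, like writeSize(0)
        out.write(payload);              // mode-specific fields
        out.flush();

        byte[] frame = buf.toByteArray();
        int    size  = frame.length - startPos;   // the size counts itself, not the mode byte

        ByteBuffer.wrap(frame, startPos, 4).putInt(size);

        return frame;
    }

    public static void main(String[] args) throws IOException {
        byte[] frame = writeFrame(5, "payload".getBytes("ISO-8859-1"));
        System.out.println(frame.length);   // 1 mode byte + 4 size bytes + 7 payload bytes = 12
    }
}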
Back in ServerConnection's run loop: once the initial connection request has been handled, subsequent database operations can be executed. An msgType at or above ResultConstants.MODE_UPPER_LIMIT means the packet is not an HSQLDB packet but an ODBC one.
while (keepAlive) {
    msgType = dataInput.readByte();

    if (msgType < ResultConstants.MODE_UPPER_LIMIT) {
        receiveResult(msgType);
    } else {
        receiveOdbcPacket((char) msgType);
    }
}
private void receiveResult(int resultMode) throws CleanExit, IOException { boolean terminate = false; Result resultIn = Result.newResult(session, resultMode, dataInput, rowIn); resultIn.readLobResults(session, dataInput, rowIn); server.printRequest(mThread, resultIn); Result resultOut = null; switch (resultIn.getType()) { case ResultConstants.CONNECT : { resultOut = setDatabase(resultIn); break; } case ResultConstants.DISCONNECT : { resultOut = Result.updateZeroResult; terminate = true; break; } case ResultConstants.RESETSESSION : { session.resetSession(); resultOut = Result.updateZeroResult; break; } case ResultConstants.EXECUTE_INVALID : { resultOut = Result.newErrorResult(Error.error(ErrorCode.X_07502)); break; } default : { resultOut = session.execute(resultIn); break; } } resultOut.write(session, dataOutput, rowOut); rowOut.reset(mainBuffer); rowIn.resetRow(mainBuffer.length); if (terminate) { throw cleanExit; } }

The main flow of Session's execute method:
public synchronized Result execute(Result cmd) { if (isClosed) { return Result.newErrorResult(Error.error(ErrorCode.X_08503)); } sessionContext.currentMaxRows = 0; isBatch = false; JavaSystem.gc(); switch (cmd.mode) { case ResultConstants.LARGE_OBJECT_OP : { return performLOBOperation((ResultLob) cmd); } case ResultConstants.EXECUTE : { int maxRows = cmd.getUpdateCount(); if (maxRows == -1) { sessionContext.currentMaxRows = 0; } else { sessionContext.currentMaxRows = maxRows; } Statement cs = cmd.statement; if (cs == null || cs.compileTimestamp < database.schemaManager.schemaChangeTimestamp) { long csid = cmd.getStatementID(); cs = statementManager.getStatement(this, csid); cmd.setStatement(cs); if (cs == null) { // invalid sql has been removed already return Result.newErrorResult( Error.error(ErrorCode.X_07502)); } } Object[] pvals = (Object[]) cmd.valueData; Result result = executeCompiledStatement(cs, pvals); result = performPostExecute(cmd, result); return result; } case ResultConstants.BATCHEXECUTE : { isBatch = true; Result result = executeCompiledBatchStatement(cmd); result = performPostExecute(cmd, result); return result; } case ResultConstants.EXECDIRECT : { Result result = executeDirectStatement(cmd); result = performPostExecute(cmd, result); return result; } case ResultConstants.BATCHEXECDIRECT : { isBatch = true; Result result = executeDirectBatchStatement(cmd); result = performPostExecute(cmd, result); return result; } case ResultConstants.PREPARE : { Statement cs; try { cs = statementManager.compile(this, cmd); } catch (Throwable t) { String errorString = cmd.getMainString(); if (database.getProperties().getErrorLevel() == HsqlDatabaseProperties.NO_MESSAGE) { errorString = null; } return Result.newErrorResult(t, errorString); } Result result = Result.newPrepareResponse(cs); if (cs.getType() == StatementTypes.SELECT_CURSOR || cs.getType() == StatementTypes.CALL) { sessionData.setResultSetProperties(cmd, result); } result = performPostExecute(cmd, result); return result; } case ResultConstants.CLOSE_RESULT : { closeNavigator(cmd.getResultId()); return Result.updateZeroResult; } case ResultConstants.UPDATE_RESULT : { Result result = this.executeResultUpdate(cmd); result = performPostExecute(cmd, result); return result; } case ResultConstants.FREESTMT : { statementManager.freeStatement(cmd.getStatementID()); return Result.updateZeroResult; } case ResultConstants.GETSESSIONATTR : { int id = cmd.getStatementType(); return getAttributesResult(id); } case ResultConstants.SETSESSIONATTR : { return setAttributes(cmd); } case ResultConstants.ENDTRAN : { switch (cmd.getActionType()) { case ResultConstants.TX_COMMIT : try { commit(false); } catch (Throwable t) { return Result.newErrorResult(t); } break; case ResultConstants.TX_COMMIT_AND_CHAIN : try { commit(true); } catch (Throwable t) { return Result.newErrorResult(t); } break; case ResultConstants.TX_ROLLBACK : rollback(false); break; case ResultConstants.TX_ROLLBACK_AND_CHAIN : rollback(true); break; case ResultConstants.TX_SAVEPOINT_NAME_RELEASE : try { String name = cmd.getMainString(); releaseSavepoint(name); } catch (Throwable t) { return Result.newErrorResult(t); } break; case ResultConstants.TX_SAVEPOINT_NAME_ROLLBACK : try { rollbackToSavepoint(cmd.getMainString()); } catch (Throwable t) { return Result.newErrorResult(t); } break; case ResultConstants.PREPARECOMMIT : try { prepareCommit(); } catch (Throwable t) { return Result.newErrorResult(t); } break; } return Result.updateZeroResult; } case ResultConstants.SETCONNECTATTR : 
{ switch (cmd.getConnectionAttrType()) { case ResultConstants.SQL_ATTR_SAVEPOINT_NAME : try { savepoint(cmd.getMainString()); } catch (Throwable t) { return Result.newErrorResult(t); } // case ResultConstants.SQL_ATTR_AUTO_IPD // - always true // default: throw - case never happens } return Result.updateZeroResult; } case ResultConstants.REQUESTDATA : { return sessionData.getDataResultSlice(cmd.getResultId(), cmd.getUpdateCount(), cmd.getFetchSize()); } case ResultConstants.DISCONNECT : { close(); return Result.updateZeroResult; } default : { return Result.newErrorResult( Error.runtimeError(ErrorCode.U_S0500, "Session")); } } }
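Seen from the client, the ENDTRAN and SETCONNECTATTR branches above are driven by ordinary JDBC transaction calls. The sketch below shows the rough correspondence; the URL and credentials are illustrative, and the mapping noted in the comments is inferred from the branches above rather than taken from the driver's source.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Savepoint;

public class TxDemo {
    public static void main(String[] args) throws Exception {
        // Illustrative URL: a server exposing a database named "test" on the default port.
        Connection conn = DriverManager.getConnection(
                "jdbc:hsqldb:hsql://localhost/test", "SA", "");

        conn.setAutoCommit(false);                // sessions start in auto-commit by default

        Savepoint sp = conn.setSavepoint("sp1");  // presumably SETCONNECTATTR / SQL_ATTR_SAVEPOINT_NAME
        conn.rollback(sp);                        // presumably ENDTRAN / TX_SAVEPOINT_NAME_ROLLBACK
        conn.commit();                            // ENDTRAN / TX_COMMIT
        conn.rollback();                          // ENDTRAN / TX_ROLLBACK

        conn.close();                             // DISCONNECT
    }
}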
For example, suppose the client executes the following:
String ddl0 = "DROP TABLE ADDRESSBOOK IF EXISTS; DROP TABLE ADDRESSBOOK_CATEGORY IF EXISTS; DROP TABLE USER IF EXISTS;";
String ddl1 = "CREATE TABLE USER(USER_ID INTEGER NOT NULL PRIMARY KEY,"
    + "LOGIN_ID VARCHAR(128) NOT NULL,"
    + "USER_NAME VARCHAR(254) DEFAULT ' ' NOT NULL,"
    + "CREATE_DATE TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL,"
    + "UPDATE_DATE TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL,"
    + "LAST_ACCESS_DATE TIMESTAMP,"
    + "CONSTRAINT IXUQ_LOGIN_ID0 UNIQUE(LOGIN_ID))";
String result1 = "1";
String result2 = "2";

stmnt.execute(ddl0);
stmnt.execute(ddl1);

The Result received by the server then carries the following fields (a runnable client-side sketch of this setup follows the EXECDIRECT decoding below):
case ResultConstants.EXECDIRECT :
    result.updateCount         = in.readInt();
    result.fetchSize           = in.readInt();
    result.statementReturnType = in.readByte();
    result.mainString          = in.readString();
    result.rsProperties        = in.readByte();
    result.queryTimeout        = in.readShort();
    result.generateKeys        = in.readByte();

    if (result.generateKeys == ResultConstants
            .RETURN_GENERATED_KEYS_COL_NAMES || result
            .generateKeys == ResultConstants
            .RETURN_GENERATED_KEYS_COL_INDEXES) {
        result.generatedMetaData = new ResultMetaData(in);
    }
    break;

Here mainString is the SQL text itself; what follows is the execution path.
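For reference, the stmnt used in that fragment would come from an ordinary network JDBC connection. A runnable sketch, with an illustrative driver class, URL and credentials, assuming a server exposing a database named test:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DdlDemo {
    public static void main(String[] args) throws Exception {
        Class.forName("org.hsqldb.jdbc.JDBCDriver");   // 2.x driver class; older releases use org.hsqldb.jdbcDriver

        try (Connection conn = DriverManager.getConnection(
                 "jdbc:hsqldb:hsql://localhost/test", "SA", "");
             Statement stmnt = conn.createStatement()) {

            // Each execute() arrives at the server as one EXECDIRECT result
            // whose mainString is the SQL text.
            stmnt.execute("DROP TABLE USER IF EXISTS");
            stmnt.execute("CREATE TABLE USER("
                    + "USER_ID INTEGER NOT NULL PRIMARY KEY,"
                    + "LOGIN_ID VARCHAR(128) NOT NULL,"
                    + "CONSTRAINT IXUQ_LOGIN_ID0 UNIQUE(LOGIN_ID))");
        }
    }
}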
The fragment of Session's execute method that handles this is:
case ResultConstants.EXECDIRECT : {
    Result result = executeDirectStatement(cmd);

    result = performPostExecute(cmd, result);

    return result;
}

which brings us to executeDirectStatement:
public Result executeDirectStatement(Result cmd) { String sql = cmd.getMainString(); HsqlArrayList list; int maxRows = cmd.getUpdateCount(); if (maxRows == -1) { sessionContext.currentMaxRows = 0; } else if (sessionMaxRows == 0) { sessionContext.currentMaxRows = maxRows; } else { sessionContext.currentMaxRows = sessionMaxRows; sessionMaxRows = 0; } try { list = parser.compileStatements(sql, cmd); } catch (Throwable e) { return Result.newErrorResult(e); } Result result = null; for (int i = 0; i < list.size(); i++) { Statement cs = (Statement) list.get(i); cs.setGeneratedColumnInfo(cmd.getGeneratedResultType(), cmd.getGeneratedResultMetaData()); result = executeCompiledStatement(cs, ValuePool.emptyObjectArray); if (result.mode == ResultConstants.ERROR) { break; } } return result; }

Here, at last, is parser.compileStatements(sql, cmd), the entry point of SQL parsing; the parser itself was created back in the Session constructor.
The compileStatements method of the ParserCommand class:
HsqlArrayList compileStatements(String sql, Result cmd) { HsqlArrayList list = new HsqlArrayList(); Statement cs = null; reset(sql); while (true) { if (token.tokenType == Tokens.X_ENDPARSE) { break; } try { lastError = null; cs = compilePart(cmd.getExecuteProperties()); } catch (HsqlException e) { if (lastError != null && lastError.getLevel() > e.getLevel()) { throw lastError; } throw e; } if (!cs.isExplain && cs.getParametersMetaData().getColumnCount() > 0) { throw Error.error(ErrorCode.X_42575); } cs.setCompileTimestamp( database.txManager.getGlobalChangeTimestamp()); list.add(cs); } int returnType = cmd.getStatementType(); if (returnType != StatementTypes.RETURN_ANY) { int group = cs.getGroup(); if (group == StatementTypes.X_SQL_DATA) { if (returnType == StatementTypes.RETURN_COUNT) { throw Error.error(ErrorCode.X_07503); } } else if (returnType == StatementTypes.RETURN_RESULT) { throw Error.error(ErrorCode.X_07504); } } return list; }

compilePart is where every kind of statement the database supports gets dispatched: queries, DML, DDL, transaction control and session commands.
private Statement compilePart(int props) { Statement cs; compileContext.reset(); setParsePosition(getPosition()); if (token.tokenType == Tokens.X_STARTPARSE) { read(); } switch (token.tokenType) { // DQL case Tokens.WITH : case Tokens.OPENBRACKET : case Tokens.SELECT : case Tokens.TABLE : { cs = compileCursorSpecification(RangeGroup.emptyArray, props, false); break; } case Tokens.VALUES : { RangeGroup[] ranges = new RangeGroup[]{ new RangeGroupSimple( session.sessionContext.sessionVariablesRange) }; compileContext.setOuterRanges(ranges); cs = compileShortCursorSpecification(props); break; } // DML case Tokens.INSERT : { cs = compileInsertStatement(RangeGroup.emptyArray); break; } case Tokens.UPDATE : { cs = compileUpdateStatement(RangeGroup.emptyArray); break; } case Tokens.MERGE : { cs = compileMergeStatement(RangeGroup.emptyArray); break; } case Tokens.DELETE : { cs = compileDeleteStatement(RangeGroup.emptyArray); break; } case Tokens.TRUNCATE : { cs = compileTruncateStatement(); break; } // PROCEDURE case Tokens.CALL : { cs = compileCallStatement( new RangeGroup[]{ new RangeGroupSimple( session.sessionContext .sessionVariablesRange) }, false); break; } // SQL SESSION case Tokens.SET : cs = compileSet(); break; // diagnostic case Tokens.GET : cs = compileGetStatement( session.sessionContext.sessionVariablesRange); break; case Tokens.START : cs = compileStartTransaction(); break; case Tokens.COMMIT : cs = compileCommit(); break; case Tokens.ROLLBACK : cs = compileRollback(); break; case Tokens.SAVEPOINT : cs = compileSavepoint(); break; case Tokens.RELEASE : cs = compileReleaseSavepoint(); break; // DDL case Tokens.CREATE : cs = compileCreate(); break; case Tokens.ALTER : cs = compileAlter(); break; case Tokens.DROP : cs = compileDrop(); break; case Tokens.GRANT : case Tokens.REVOKE : cs = compileGrantOrRevoke(); break; case Tokens.COMMENT : cs = compileComment(); break; // HSQL SESSION case Tokens.LOCK : cs = compileLock(); break; case Tokens.CONNECT : cs = compileConnect(); break; case Tokens.DISCONNECT : cs = compileDisconnect(); break; // HSQL COMMAND case Tokens.SCRIPT : cs = compileScript(); break; case Tokens.SHUTDOWN : cs = compileShutdown(); break; case Tokens.BACKUP : cs = compileBackup(); break; case Tokens.CHECKPOINT : cs = compileCheckpoint(); break; case Tokens.EXPLAIN : { int position = getPosition(); cs = compileExplainPlan(); cs.setSQL(getLastPart(position)); break; } case Tokens.DECLARE : cs = compileDeclare(); break; default : throw unexpectedToken(); } // SET_SESSION_AUTHORIZATION is translated dynamically at runtime for logging switch (cs.type) { // these are set at compile time for logging case StatementTypes.COMMIT_WORK : case StatementTypes.ROLLBACK_WORK : case StatementTypes.SET_USER_PASSWORD : case StatementTypes.EXPLAIN_PLAN : break; default : cs.setSQL(getLastPart()); } if (token.tokenType == Tokens.SEMICOLON) { read(); } else if (token.tokenType == Tokens.X_ENDPARSE) {} return cs; }

Our statement is a CREATE, so let's look at compileCreate:
StatementSchema compileCreate() { int tableType = TableBase.MEMORY_TABLE; boolean isTable = false; boolean isOrReplace = false; read(); switch (token.tokenType) { case Tokens.GLOBAL : read(); readThis(Tokens.TEMPORARY); readIfThis(Tokens.MEMORY); readThis(Tokens.TABLE); isTable = true; tableType = TableBase.TEMP_TABLE; break; case Tokens.TEMP : read(); readThis(Tokens.TABLE); isTable = true; tableType = TableBase.TEMP_TABLE; break; case Tokens.TEMPORARY : read(); readThis(Tokens.TABLE); isTable = true; tableType = TableBase.TEMP_TABLE; break; case Tokens.MEMORY : read(); readThis(Tokens.TABLE); isTable = true; break; case Tokens.CACHED : read(); readThis(Tokens.TABLE); isTable = true; tableType = TableBase.CACHED_TABLE; break; case Tokens.TEXT : read(); readThis(Tokens.TABLE); isTable = true; tableType = TableBase.TEXT_TABLE; break; case Tokens.TABLE : read(); isTable = true; tableType = database.schemaManager.getDefaultTableType(); break; case Tokens.OR : if (database.sqlSyntaxOra) { read(); readThis(Tokens.REPLACE); switch (token.tokenType) { case Tokens.FUNCTION : case Tokens.PROCEDURE : case Tokens.TRIGGER : case Tokens.TYPE : case Tokens.VIEW : break; default : throw unexpectedToken(Tokens.T_OR); } isOrReplace = true; } default : } if (isTable) { return compileCreateTable(tableType); } switch (token.tokenType) { // other objects case Tokens.ALIAS : return compileCreateAlias(); case Tokens.SEQUENCE : return compileCreateSequence(); case Tokens.SCHEMA : return compileCreateSchema(); case Tokens.TRIGGER : return compileCreateTrigger(isOrReplace); case Tokens.USER : return compileCreateUser(); case Tokens.ROLE : return compileCreateRole(); case Tokens.VIEW : return compileCreateView(false, isOrReplace); case Tokens.DOMAIN : return compileCreateDomain(); case Tokens.TYPE : return compileCreateType(isOrReplace); case Tokens.CHARACTER : return compileCreateCharacterSet(); case Tokens.COLLATION : return compileCreateCollation(); // index case Tokens.UNIQUE : read(); checkIsThis(Tokens.INDEX); return compileCreateIndex(true); case Tokens.INDEX : return compileCreateIndex(false); case Tokens.AGGREGATE : case Tokens.FUNCTION : case Tokens.PROCEDURE : return compileCreateProcedureOrFunction(isOrReplace); default : { throw unexpectedToken(); } } }
StatementSchema compileCreateTable(int type) {

    boolean ifNot = false;

    if (token.tokenType == Tokens.IF) {
        int position = getPosition();

        read();

        if (token.tokenType == Tokens.NOT) {
            read();
            readThis(Tokens.EXISTS);

            ifNot = true;
        } else {
            rewind(position);
        }
    }

    HsqlName name = readNewSchemaObjectName(SchemaObject.TABLE, false);

    name.setSchemaIfNull(session.getCurrentSchemaHsqlName());

    Table table;

    switch (type) {

        case TableBase.TEMP_TEXT_TABLE :
        case TableBase.TEXT_TABLE : {
            table = new TextTable(database, name, type);
            break;
        }
        default : {
            table = new Table(database, name, type);
        }
    }

    return compileCreateTableBody(table, ifNot);
}

This takes us into the Table class: database is the Database object, name the table name and type the table type; compileCreateTableBody then compiles the column definitions.
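The branches at the top of compileCreate map directly onto the table-type keywords a client can send, and the IF NOT EXISTS handling above is what lets a CREATE be repeated harmlessly. A small sketch exercising those branches over JDBC; the URL and credentials are illustrative, and CACHED tables only page rows to disk when the target database is file-based.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class TableTypeDemo {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:hsqldb:hsql://localhost/test", "SA", "");
             Statement stmnt = conn.createStatement()) {

            // Each statement lands in a different branch of compileCreate and
            // hence a different tableType passed to compileCreateTable.
            stmnt.execute("CREATE MEMORY TABLE T_MEM (ID INT)");        // MEMORY_TABLE
            stmnt.execute("CREATE CACHED TABLE T_CACHED (ID INT)");     // CACHED_TABLE
            stmnt.execute("CREATE TEMP TABLE T_TEMP (ID INT)");         // TEMP_TABLE, per-session contents
            stmnt.execute("CREATE TABLE IF NOT EXISTS T_MEM (ID INT)"); // takes the ifNot branch, no error
        }
    }
}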