
Question Answering System

[Category : SOFTWARE]

BACKGROUND

The present invention relates to a question answering system, in which a user can parse a document through the system and the system will deduce all the information contained explicitly and implicitly within the document. In particular, the user can then make queries to the system to extract new knowledge that was not explicitly found in the document. The system actually performs proofs, with the statements in the document as hypotheses, using the rules of propositional and predicate logic.
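As a rough illustration of the kind of inference described above, the sketch below hand-codes repeated modus ponens over a set of statements. The function name and the string encoding of rules are illustrative assumptions, not part of the invention, which learns such inference with LSTMs rather than hard-coding it.

```python
# Toy forward-chaining prover: derives implicit facts from explicit ones
# by repeatedly applying modus ponens ("if A then B" plus A yields B).
# A hand-coded stand-in for the learned inference the text describes.

def forward_chain(facts, rules):
    """Return the closure of `facts` under (antecedent, consequent) rules."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts
```

For example, `forward_chain({"I wake up late"}, [("I wake up late", "I miss the bus")])` derives the implicit fact "I miss the bus", which was not stated explicitly.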

SUMMARY OF THE INVENTION

This invention uses LSTM neural networks that are trained with natural language examples of the rules found in first order and second order logic. There will be a network for each of the different logical rules. Training will take the supervised format: statements that contain implicit information (further information that can be inferred from them) are fed into the network, and the network is trained to produce the logical conclusions. For example, when the modus ponens LSTM module is fed the statements "If I wake up late, I will miss the bus" and "I wake up late", the network will output the conclusion "I miss the bus". This is just one propositional logic rule; there will be a separate network for each logical rule. Note that if a statement is not relevant to the particular logic an LSTM module works with, that statement will produce garbage as a result.

Once trained, we run all the training examples through the relevant LSTM module once more. This time we take note of the sequence of firings of the neurons in the network as it infers some fact. A matrix of values is then generated, with one entry for each neuron in the LSTM network, recording its status as a 1 if it fired and a 0 if it did not.

If we did this for the modus ponens module we would get a series of matrices, one for each example from our training set. It is here that we need a bit of linear algebra. What we would like is that when we parse a document through the modus ponens module, each statement forms a matrix recording which neurons fired as it was interpreted. Some of those statements will produce garbage, and we want a way to identify the statements that produce meaningful information.
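The firing-record step above can be sketched as simple thresholding of the network's per-timestep hidden activations into a 0/1 matrix. The use of numpy and the 0.5 threshold are illustrative assumptions; the text does not specify how "fired" is decided.

```python
import numpy as np

def firing_matrix(activations, threshold=0.5):
    """Binarize a (timesteps x neurons) array of LSTM hidden activations:
    1 where a unit 'fired' (exceeded `threshold`), 0 otherwise.
    The threshold value is an illustrative choice, not from the text."""
    return (np.asarray(activations) > threshold).astype(int)
```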
To do this, we take the matrices formed by the legitimate examples of the logic being modelled and perform a change of basis, transforming these matrices (multi-dimensional vectors) into equivalent vectors in the vector space that we are changing the basis to. All legitimate examples of modus ponens will then possess the same vector, just at different positions within the new vector space. During operation we will parse a whole document through each of the modules, noting where we see the vector that represents meaningful information. The statements that produce garbage will not form the same vector, as we will arrange the change of basis so that only legitimate modus ponens inferences (for example) form that vector. This could be done by mapping all the training examples' matrices to each other and using this mapping as the basis for the change-of-basis matrix.

Note also that even if the information needed to make a logical conclusion lies in different parts of the document, both fragments will align in the new vector space to form the vector we are looking for.

During operation we will parse the whole document through each module, picking out the vectors that represent meaningful information. We will then work backwards to identify which statements were involved in forming each such vector, and feed them once more into the module as a unit to get the required information. Note that a statement derived from one logical operation may need to be combined with another in a different logical module, so the result of each module is sent to every other module for analysis as soon as it is generated. This makes it possible for the system to perform whole proofs with all the statements found in the document as hypotheses.

The next part of this invention concerns the process of extracting the information found. There will be another LSTM network that is trained to do the following.
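One possible reading of the change-of-basis step described earlier is a linear map, fitted by least squares, that sends every legitimate example's flattened firing matrix to one canonical vector; a statement whose mapped vector lands far from that canonical vector is treated as garbage. This is only a sketch of one way such a map could be constructed; numpy, the all-ones target vector, and the tolerance are all assumptions.

```python
import numpy as np

def fit_basis_change(legit_matrices):
    """Fit a linear map T (by least squares) sending each flattened
    legitimate firing matrix to a single canonical vector.
    The all-ones target is an arbitrary illustrative choice."""
    X = np.stack([m.ravel().astype(float) for m in legit_matrices])
    canonical = np.ones(X.shape[1])
    Y = np.tile(canonical, (X.shape[0], 1))
    T, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return T, canonical

def is_meaningful(matrix, T, canonical, tol=1e-6):
    """A statement's firing matrix is 'meaningful' if its image under T
    lies (within `tol`) on the canonical vector."""
    return np.linalg.norm(matrix.ravel() @ T - canonical) < tol
```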
Upon being given a statement, this question-generating network produces all the possible questions that the statement could have been the answer to. The network will have been trained with pairs of questions and answers, learning to convert answers into questions.
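A hand-coded, template-based stand-in for the question-producing network might look like the following. The single "X is Y" pattern is an illustrative assumption; the invention would instead learn such mappings from question/answer pairs.

```python
# Toy question generator: turns a declarative answer into candidate
# questions it could have answered. A template stand-in for the
# question-producing LSTM described in the text.

def questions_for(statement):
    """Generate candidate questions for statements of the form 'X is Y'."""
    questions = []
    if " is " in statement:
        subject, _, rest = statement.partition(" is ")
        questions.append(f"What is {subject}?")
        questions.append(f"Is {subject} {rest}?")
    return questions
```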
Once a vector has been analysed by the logical modules, the statements generated as conclusions of proofs, or as stages of a proof, are fed to the question-producing LSTM in order to produce questions.

Finally, a last LSTM network will collect all the pairs of questions and answers and be trained to answer those questions. This makes it possible, after parsing a document, to query that last network with queries concerning the document.
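The final stage, collecting question/answer pairs and then answering user queries about the parsed document, can be sketched with a lookup table standing in for the last trained LSTM. The class and method names are illustrative; a trained network would generalize beyond exact matches, whereas this table cannot.

```python
# End-to-end stand-in for the final stage: store the generated
# (question, answer) pairs, then answer user queries by lookup.
# A trained LSTM would generalize; this table only matches exactly.

class ToyQA:
    def __init__(self):
        self._pairs = {}

    def add_pair(self, question, answer):
        self._pairs[question.strip().lower()] = answer

    def query(self, question):
        return self._pairs.get(question.strip().lower(), "unknown")
```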








Copyright PatentAuction.com 2004-2017