by Emna Jaoua
Last Updated July 04, 2018 10:19 AM

I have implemented an LSTM model that has 2 LSTM layers, a dropout layer, and a dense layer for predictions. I trained my LSTM model on 1000 XML files. Each file has 4 main markups with very simple fields between the markups. My training data was built as a list of sequences, with a sequence length of 3 and a step window of 1.
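The windowing step described above can be sketched as follows; the token list and function name here are illustrative stand-ins, not the original code:

```python
def make_sequences(tokens, seq_len=3, step=1):
    """Slide a window of seq_len over the token stream; the token
    right after each window is that window's prediction target."""
    inputs, targets = [], []
    for i in range(0, len(tokens) - seq_len, step):
        inputs.append(tokens[i:i + seq_len])
        targets.append(tokens[i + seq_len])
    return inputs, targets

# Hypothetical markup stream from one parsed XML file
tokens = ["<a>", "<b>", "<c>", "<d>", "<a>", "<b>", "<c>", "<d>"]
X, y = make_sequences(tokens)
print(X[0], "->", y[0])  # ['<a>', '<b>', '<c>'] -> '<d>'
```

With a step of 1, consecutive training examples overlap by two markups, which is what gives the model enough context to learn the markup order.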

As model parameters, I have set:

learning rate: 0.001

batch size: 65

Number of iterations: 20

For predictions, I give my model 3 words as a seed (3 XML markups chosen randomly), and the model should generate the next 10 markups. What I don't understand is why my model predicts accurate results when the input seed is chosen randomly, but does not predict accurately when I give it a constant input seed.
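For reference, the generation loop described above typically feeds each predicted markup back into the 3-token window. This is a minimal sketch with a stand-in `predict_next` function in place of the trained LSTM; the names and the toy "model" are assumptions for illustration:

```python
def generate(seed, predict_next, n=10):
    """Repeatedly predict the next markup from the last 3 tokens
    and append it to the window (free-running generation)."""
    window = list(seed)  # seed: 3 markups
    out = []
    for _ in range(n):
        nxt = predict_next(window[-3:])  # stand-in for model.predict + argmax
        out.append(nxt)
        window.append(nxt)
    return out

# Toy deterministic "model": cycles through 4 markups in order
cycle = ["<a>", "<b>", "<c>", "<d>"]
predict_next = lambda w: cycle[(cycle.index(w[-1]) + 1) % 4]
print(generate(["<a>", "<b>", "<c>"], predict_next))
```

Note that with greedy (argmax) decoding like this, a fixed seed always yields the exact same 10 markups, so a constant seed that puts the model in a state it handles poorly will fail identically every time, whereas random seeds average over many states; sampling from the output distribution instead of taking the argmax is one common way to vary the output for a fixed seed.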
