So, I've read numerous times in the past couple of weeks, on various topics, something along the lines of: in order to get good at something, do it a lot. So here I am, writing. My topics will be many and varied, as I have lots of interests, from goat farming to electromagnetic rail launches for satellites, AI and machine learning to hapkido and meditation, politics to religion (two of my favorites at parties).
Today I think I'll write a bit about two AI / machine learning / deep learning courses I'm taking from coursera.org. They're interesting so far; one feels a bit light on algorithms, but it promises to get more in depth later. What I find vexing is that they all seem to rely on the same couple of ideas, neither of which could be called new.
One is backpropagation, which is what lets these algorithms be (somewhat) self-learning, and the other is gradient descent, or some variant of it, used to find the equations that best fit the data. I'm not a mathematician by any stretch, but I wonder if there are other, better ways to fit the data.
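Since the courses haven't gone deep into the math yet, here's my own rough sketch of what gradient descent is doing: fitting a line y = w*x + b to some toy points by repeatedly stepping downhill along the gradient of the squared error. The names, numbers, and data are all made up for illustration, not taken from either course.

```python
# A minimal sketch of gradient descent fitting a line y = w*x + b
# to toy data. Everything here is illustrative, not from the courses.

def fit_line(xs, ys, lr=0.01, steps=1000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of the mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        # Step downhill along the gradient, scaled by the learning rate
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]  # roughly y = 2x + 1
print(fit_line(xs, ys))
```

Run it and w and b should come out near 2 and 1. The real frameworks are doing the same thing, just with far more parameters, and with backprop computing the gradients automatically.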
I wonder if quantum computing algorithms would make these processes faster; it would seem applicable at first glance, with my limited knowledge of QC. On the other concern, backprop being the current backbone of AI, I can't think of another method of automated feedback.
Maybe another algorithm could complement it: a kind of automated supervised learning to handle tweaking the meta-parameters used in the process, along the lines of the sketch below.
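To make that idea concrete, here's a hedged little sketch of one way it might work: a plain random search over the learning rate, reusing the fit_line function from the sketch above. This isn't anything either course teaches; it's just my guess at the simplest form of "an algorithm tuning the algorithm."

```python
# A sketch of automated meta-parameter tweaking: random search over
# the learning rate. Assumes fit_line from the earlier sketch is in
# scope; the candidate range and trial count are arbitrary choices.

import random

def line_loss(w, b, xs, ys):
    # Mean squared error of the fitted line on the data
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def tune_learning_rate(xs, ys, trials=20):
    best_lr, best_loss = None, float("inf")
    for _ in range(trials):
        # Sample a learning rate between 1e-4 and 1e-1 on a log scale
        lr = 10 ** random.uniform(-4, -1)
        w, b = fit_line(xs, ys, lr=lr, steps=500)
        loss = line_loss(w, b, xs, ys)
        if loss < best_loss:
            best_lr, best_loss = lr, loss
    return best_lr, best_loss
```

Crude, but it's one outer loop of "learning" sitting on top of the inner one, which is roughly what I had in mind.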
Well, that's it for this time. Forgive any horrendous typos, as I'm writing this on my Surface Pro using the handwriting-to-text feature ☺ Judging by this, I will definitely need to work on my handwriting, or just type.