Objective-less objective 🧩
Can you think of any instance in your problem-solving that isn't objective-driven?
Sometimes a problem seems very complex in its environment. Does our perspective on the problem change if we end up manipulating the objectives toward an already defined solution in the same environment?
Are program and process two different ways of contributing to innovation?
In the machine learning and NLP domain, I always catch myself fantasizing about results pretty quickly! I was going through a talk by Prof. Kenneth Stanley on "Why Greatness Cannot Be Planned".
The solutions we work on for a problem are always a crossroads of technical details, philosophical insights, and street wisdom! Neuroscience, AI, and philosophy mutually inform each other in many respects.
Inspiration for problem-solving is always multi-dimensional — it may boil down to science, philosophy, art, and mathematics.
The idea that we can't discuss tends to be an interesting one.
The goal for me might not be "to win", but to keep things constantly interesting and fun!
The intuition behind creating a concept comes from our evolution, exposure, and experience, how we process that experience, the factors we feel appreciated for, and the things we were complaining about.
DATA quo: One of the interesting quotes by Shannon: "We may have knowledge of the past but cannot control it; we may control the future but have no knowledge of it" — but that's the way systems tend to operate.
Forcing ourselves to get an accurate depiction might lead us into a cycle of perfecting the objectives.
Does the intelligence of systems in the engineering pipeline come from the process or from the idea?
Listing the perspectives I learned, or got used to, when I started machine learning:
- I always had only one perspective on deep learning and machine learning paradigms — approximating a differentiable objective function, i.e., objective optimization. This is the crux of most algorithms and is applied across a swath of problems.
- Problems that ought to converge.
- Everything is objective-driven — the dominance of objectives.
- The convex hull of modularizing all pursuits within objectives.
- As an artist, mapping mental design to its realization is an objective.
- The objective is a finite boundary to exploration, and it's a security blanket for moderated or quantifiable work.
- A hypothesis space or search space that’s already defined.
- Measuring gradients, or defining policy mechanisms and rewards, to track the journey toward objectives.
- Assuming local objectives are a function of, or an assurance factor for, ambitious objectives.
- Objectives as a paved path to "true happiness".
- Unpredictability as a "rule-out" or an "exception" in an objective.
- Not thinking like “what you would never do is what you should have done”
- Being blinded to what the stepping stones had to offer by our planned path to the objective.
- Defining "interestingness" — switching to the thinking that "not everything that's novel is interesting, but everything that's interesting is novel".
- The objective is tied closely to productivity and to what the company requires — which, in a way, manipulates the path.
- There is a combination of searching for interestingness and objective optimization in every problem we solve.
- The problem is defined with a maximum likelihood estimate or an objective so that capital investment can be brought in.
- The scope of subjectivity and intuition isn't explored unless proved with evidence.
- The assumption that the objective function and the final outcome are aligned — that the stepping stones to exceptional work resemble exceptional work.
- Not being ready to trade a slight decrease now to prepare for the big increase down the line.
- There is a constant search for local optima to gauge whether we are moving in the right direction.
- Search is always in the space of all possible things.
- The next step in a machine learning algorithm is estimated from the direction of the gradient, and optimization proceeds in that direction.
- Reinforcement learning algorithms tend to rely on random search, without a notion of interestingness in the search.
- Language modeling — we perform greedy search over the probability distribution of the vocabulary, which may end up in a cycle.
- There is always a method for converging toward a local optimum of the objective, minimizing the loss.
- Neural networks tend to revolve within a space of solutions. Is there a representation space of the problem that keeps them meaningful forever?
- Comparing where you are to where you want to be in the future vs Comparing where you are to where you were in the past.
- Assuming novelty search is a random process, letting the accumulated information fade away.
- Keeping only the gradient of optimization on the radar and ignoring the gradient of novelty.
- Causal reasoning vs statistical correlation.
- Trusting significance over intuition.
- There is a trade-off between being in a committee and being autonomous, between being reasonable and being objective.
- An objective-driven voting system steers the final outcome toward whatever is agreeable.
- People see context switching as a loss of focus, but it really brings different angles of argument.
- Knowing the final objective and optimizing toward it restricts the possibilities it may end up producing (owing more to the deception of objectives than to the problem structure).
- Is it worth categorizing the behavior into objectives?
- An easy way to push a problem off the radar is to say it lacks an objective measure.
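The greedy-search point above can be made concrete with a sketch. The transition table below is entirely made up — no real language model is involved — but it shows how always taking the argmax next token can lock a decoder into a repetition cycle:

```python
# Toy next-token "model": for each token, a hand-made probability
# distribution over the next token. All tokens and probabilities are
# invented for illustration; no real language model is involved.
PROBS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.6, "ran": 0.4},
    "sat": {"on": 0.9, "down": 0.1},
    "on":  {"the": 0.8, "a": 0.2},
}

def greedy_decode(start, steps):
    """Always pick the highest-probability next token (argmax)."""
    seq = [start]
    for _ in range(steps):
        nxt = max(PROBS[seq[-1]], key=PROBS[seq[-1]].get)
        seq.append(nxt)
    return seq

# Greedy decoding revisits "the" and loops forever:
# the -> cat -> sat -> on -> the -> cat -> sat -> on -> ...
print(" ".join(greedy_decode("the", 8)))
```

Sampling from the distribution instead of taking the argmax (or penalizing repeats) breaks the cycle — which is one reason decoding strategies beyond pure greedy search exist.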
How the world accepts me as an ML engineer might not be an objective I decide — it's a never-ending objective that the world keeps augmenting.
Perspectives I wish could push me to explore another space — one that may not be immediately productionized or included in the engineering ecosystem, but a view that would pull me out of delusion about immediate results:
- Training in a generic way might drift into a space that we don't find interesting.
- Divergence
- Preserving the diversity
- Collecting stepping stones
- Apart from the supererogatory objective, including open-endedness (this somewhat has to be measurable and agreeable under a moderated policy). I like open-endedness when there is a concept of a "playground" that gives scope for exploration first.
- Consideration of Environment
- Thinking about exploration vs. exploitation — convergence vs. interestingness.
- It's not only about generating solutions to a problem; new problems and opportunities come along at the same time.
- A great example is the Chinese finger trap — the complete opposite (pushing inward) of the obvious intuition (pulling harder) is what gets you out of the trap.
- Focus on novelty search — it accumulates information on which optimization can be done directly. A variant is population search — we search through a population of solutions that we develop over time.
- Defining interestingness and curiosity — to add a scope of subjective notions to intelligent systems.
- Open-endedness — by analogy to evolution, there is not a single algorithmic process; everything came about diversely at the same time. There is unbounded open-endedness in things around us that we are not paying attention to.
- The idea of having a creative process in general intelligence, rather than an approximation process or a kind of interpolation among data points.
- How Earth operates — generating new opportunities and searching through them at the same time (divergence plus self-generation) — the Paired Open-Ended Trailblazer.
- Opportunities are many-to-many relationships — a chain of many-to-many relationships that can go on forever. The target audience might learn something because I write, and I write because the target audience tends to read it.
- If there are thousands of systems that each have an objective, in aggregate the whole system might become less objective-driven.
- Evolutionary algorithms: Novelty search + Random cross-over = Divergence.
- Learning as we progress — meta-learning — having an infinite number of objectives as we go through the search process. When the objectives become very diverse, the system becomes flexible, with no rigid objectives, and divergence happens continuously.
- We can solve if we keep exploring the known; we can innovate if we explore the unknown.
- Exploring the branching factors, not focusing on one branch at a time.
- Brainstorming whether evolution-based algorithms and topology help solve the problem (delineating mere augmentation from innovation).
- Certain situations are serendipitous rather than yet another investment of time.
- Coming up with a new status quo that isn't a naive search-optimization problem over possibilities.
- Principle of allocating funds to novelty search.
- Finding the right balance and degree in how we weigh interestingness against the objective.
- It's not always true that greatness is the byproduct of exploring interestingness, but it's a better bet than chasing local optima.
- Open discussions on why we find certain things interesting, given the finite resources and constraints we might have.
- The notion of pruning the junk so as to produce enough cool stuff along the path of discovery.
- It's important to incorporate the search paradigm into the problem space.
- Provoking the discussion of ideas vs. the discussion of possible solutions.
- Should the focus of deep learning be generality, open-endedness, or hyper-specialists (making seemingly moderate, resourceful assumptions about the environment)?
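To make the novelty-search idea above concrete, here is a minimal one-dimensional sketch. All names, thresholds, and numbers are my own invention (not from any library): instead of scoring candidates against an objective, a candidate is kept only if its behavior is sufficiently far from the behaviors already archived.

```python
import random

def novelty(behavior, archive, k=3):
    """Novelty = mean distance to the k nearest behaviors seen so far."""
    if not archive:
        return float("inf")  # the first behavior is maximally novel
    nearest = sorted(abs(behavior - b) for b in archive)[:k]
    return sum(nearest) / len(nearest)

def novelty_search(behavior_of, mutate, seed, steps=200, threshold=0.1):
    """Grow a population by rewarding difference, not objective scores."""
    archive, population = [], [seed]
    for _ in range(steps):
        child = mutate(random.choice(population))
        b = behavior_of(child)
        if novelty(b, archive) > threshold:  # keep only what is *new*
            archive.append(b)
            population.append(child)
    return population, archive

# Toy usage: the "genome" is a single float, its behavior is itself,
# and mutation is Gaussian noise.
random.seed(0)
pop, archive = novelty_search(lambda g: g, lambda g: g + random.gauss(0, 1), seed=0.0)
print(f"archived {len(archive)} behaviors spanning "
      f"{max(archive) - min(archive):.1f} units")
```

There is no objective anywhere in the loop, yet the archive steadily spreads out — the information accumulation mentioned above, on top of which optimization could later be done.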
Open Questions that arose in my mind:
- Presumably, my mind is trained to define everything, whether it's ambition, dreams, or goals.
- My mind looks at the page number as a reward before even completing the book, without thinking about the possible unbounded executions it might create.
- Does reading more about NLP move me closer to mastering NLP?
- What differentiates machine learning algorithms from human-level comprehension is the objective function (which establishes statistical correlations). Everywhere, the context closely resonates with the objective function.
- Can I solve a problem without taking preconceived results into consideration?
- Does solving one objective after another take us seemingly to a solution rather than greatness in discovery?
- I personally don't notice how a bucket of objectives dominates a single day. Am I driven by small rewards?
- Have I encountered a scenario where an objective blocked discovery?
- Do we really try to formalize problems that tend to be subjective?
- I know what the ideal understanding system needs to be. Can I plot these latent capabilities and amplify them with AI assistance?
- We are the creators of problems and solutions. The existence of one thing creates the opportunity for things that can solve a problem; things came into existence because of the problem.
- It's true that natural evolution succeeds without a specific goal or objective defined. There is only the underlying effect of reproduction.
- How can I discover the knowledge I need for a piece of work or a project? Can I find new problems and new solutions in parallel?
- I find writing interesting, it might lead to lots of open-ended opportunities. I think of it as a fertile agricultural process of discovering something each day. This is true open-endedness.
- Somehow I do feel greatness is constrained by the objective-driven way of thinking we are conditioned into.
- How can we convince people when someone finds a thing interesting and can argue for the plenty of opportunities it may open up, creating a new fertile ground that we hadn't even thought about?
- How can I find a connection — someone who finds my stepping stone interesting?
- Being an artist, have I acknowledged it and put that inclination to use in science?
Matter of subjectivity in conversational AI and NLP:
- We are transforming the space by formalizing the problem of discourse into certain objective functions that rate the system as performant or not.
- We often think of the space of language as a representation problem, evolving into a number of embeddings and representations, rather than a missing-information problem involving the compositional semantics of linguistic information.
- Language understanding is categorized into levels: solving a clearly defined problem, finding the problem, finding the areas, finding the people who can find the areas.
- There is a constant shift in NLP between representation problems (we are not able to verbalize it) and missing-information problems (we don't know it yet).
- Diverging too far — assembling random language-understanding components on top of each other — would lead to total randomness, which will not be interesting.
- There is a strong resonance criterion involved in facilitating discussions on what resonates with the consumer of the product.
- Does scaling capture the subtle nuances in the complexity of building intelligent systems?
- Natural Language Understanding is not tied entirely to semantic embeddings as a representation primitive. What does it mean for a system to understand a language?
- The trade-off between understanding and reasoning — I think we should call these systems natural language reasoning systems rather than natural language understanding systems. Ultimately we are deriving new knowledge from existing knowledge (the training data) and from other existing experiences, inferred under the notion of enforced semantic structures. There could also be natural language mapping systems — given enough of a prior distribution, mapping an unknown sentence onto the existing distribution.
- NLU systems are often binary — either we understand the intent, or we route to a fallback intent. There is not much discovery or extrapolation of the missing information that could be deciphered.
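The binary behavior in that last point can be sketched in a few lines. Everything here — the intents, keywords, and threshold — is a hypothetical toy, but it shows how such a system either clears a confidence bar or gives up, learning nothing from a near miss:

```python
# Hypothetical intents and keywords, for illustration only.
INTENTS = {
    "book_flight":   {"book", "flight", "fly", "ticket"},
    "check_weather": {"weather", "rain", "forecast", "sunny"},
}
FALLBACK = "fallback"

def classify(utterance, threshold=0.3):
    """Score each intent by keyword overlap; fall back below the bar."""
    words = set(utterance.lower().split())
    best_intent, best_score = FALLBACK, 0.0
    for intent, keywords in INTENTS.items():
        score = len(words & keywords) / len(keywords)
        if score > best_score:
            best_intent, best_score = intent, score
    # Binary outcome: a near miss tells us nothing about *what*
    # information was missing from the utterance.
    return best_intent if best_score >= threshold else FALLBACK

print(classify("book a flight ticket"))   # clears the bar
print(classify("will it rain tomorrow"))  # near miss, silently dropped
```

The second utterance matches one weather keyword but falls just under the bar, so it is treated exactly like gibberish — the "missing information" is never surfaced.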
Resonating what I know with what I love most (AI and art):
There is also an interesting connection to art, where we try to generate or reproduce something that may not be accurate — it's just a different version of a real-life story.
Taking a natural artifact and working on reproducing the phenomenon in an artificial space, considering aesthetics that are subjective in nature.
When people call a piece of art a masterpiece, what kind of intelligence and thought process is embodied in it, unbounded by rewarding falsified art objectives?
Mathematics and science both have an artistic side that can boost creativity and innovation.
What an amazing train of thought I had hearing Dr. Tim's MLST podcast with Prof. Kenneth Stanley.
These perspectives are not explicit when we start our journey in machine learning, but these nuanced perspectives and insights are a kind of treasure hunt, as Prof. Kenneth put it.
I hope you enjoyed reading it!