Friday, April 15, 2016

On Facts, Numbers, Emotions and Software Releases

A recent study published in Science Magazine looks at communication, opinions and beliefs, and how they can be influenced - in some cases over very long terms - by a fairly simple technique: open communication and honest sharing.

What makes this particular study interesting is that it was conducted by two researchers who attempted to replicate the results of a previous study also published in Science Magazine on the same topic. The reason they were unable to do so was simple: The previous study had been intentionally fraudulent.

The results of the second study were, in some ways, more astounding than those of the previous study. In short, people can be influenced to the point of changing opinions and views on charged, sensitive topics after engaging in non-confrontational, personal, anecdote-based conversation.

The topics covered everything from abortion to gay and transgender rights. Hugely sensitive topics, particularly in the geographic areas where the studies were conducted.

In short, when discussing sensitive topics, basing your arguments on "proven facts" does little to bring about a change in perception or understanding among people with firmly held, differing beliefs.

Facts don't matter.

Well-reasoned, articulate, fact-based dissertations will often do little to change people's minds about pretty much anything. They may "agree" with you so you will go away, but they really have not been convinced. There are scores of examples currently in the media; I won't bore (or depress) anyone (including myself) by listing any of them.

Instead, consider this: Emotions have a greater impact on most people's beliefs and decision making processes than the vast majority of people want to believe.

This is as true for "average voters" as it is for people making decisions about releasing software.

That's a pretty outrageous statement, Pete. How can you honestly say that? Here's one example...

Release Metrics

Bugs: If you have ever worked at a shop, large or small, that had a rule of "No software will be released to production with known P-0 or P-1 bugs," it is likely you've encountered part of this. It is amazing how quickly a P-1 bug becomes a P-2 bug, and the fix gets bumped to the next release, when there is a "suitable" work-around.

When I hear that, or read it, I wonder "Suitable to whom?" Sometimes I ask flat out what is meant by "suitable." Sometimes, I smile and chalk that up to the emotion of the release.

Dev/Code Complete: Another favorite is "All features in the release must be fully coded and deployed to the Test Environment {X} days (or weeks) before the release date. All code tasks (stories) will be measured against this, and the quality of the release will be compared against the percentage of stories completed out of all the stories in the release." What?

That is really hard for me to say aloud and is kind of goofy in my mind. Rules like this make me wonder what has happened in the past for such strict guidelines to be put in place. I can understand wanting to make sure there are no last-minute code changes going in. I have also found that changing people's behavior tends to work better by using the carrot - not a bigger stick to hit them with.

Bugs Found in Testing: There is a fun mandate that gets circulated sometimes. "The presence of bugs found in the Test Environment indicates Unit Testing was inadequate." Hoo-boy. It might indicate that unit testing was inadequate. It might also indicate something far more complex and difficult to address by demanding "more testing." 

Alternatives?

Saying "These are bad ideas" may or may not be accurate. They may be the best ideas available to the people making "the rules." They may not have any idea on how to make them better.

Partly, this is the result of people with glossy handouts explaining to software executives how their "best practices" will work to eliminate bugs in software and eliminate release night/weekend disasters. Of course, the game there is that these "best practices" only work if the people with the glossy handouts are doing the training and giving lectures and getting paid large amounts of money to make things work.

And when they don't, more often than not the reason presented is that the company did not "follow the process correctly" or is "learning the process." Of course, if the organization tries to follow the consultant's model based only on the preliminary conversations, the effort is doomed to failure - and will lead to large amounts of money going to the consultant anyway.

Consider

A practice I first encountered many years ago, before "Agile" was a cool buzzword, was enlightening. I was working on a huge project as a QA Lead. Each morning, early, we had a brief touch-point meeting of project leadership (development leads and managers, me as QA Lead, PM, other boss-types) discussing the goal for the day in development and testing.

As we were coming close to the official implementation date, a development manager proposed a "radical innovation." At the end of one of the morning meetings, he went around the room asking the leadership folks how they felt about the state of the project. I was grateful because I was pushing hard to not be the gatekeeper for the release or the Quality Police.

How he framed the question of "state of the project" was interesting - "Give a letter grade for how you think the project is going, where 'A' is perfect and 'E' is doomed." Not surprisingly, some of the participants said "A - we should go now, everything is great..." A few said "B - pretty good but room for improvement..." A couple said "C - OK, but there are a lot of problems to deal with." Two of us said "D - there are too many uncertainties that have not been examined."

Later that day, he and I repeated the exercise in the project war-room with the developers and testers actually working on the project. The results were significantly different. No one said "A" or "B". A few said "C". Most said "D" or "E".

The people doing the work had a far more negative view of the state of the project than the leadership did. Why was that?

The leadership was looking at "Functions Coded" (completely or in some state of completion) and "Test Cases Executed" and "Bugs Reported" and other classic measures.

The rank-and-file developers and testers were more enmeshed in what they were seeing - the questions that were coming up each day that did not have an easy or obvious answer; the problems that were not "bugs" but were weird behaviors and might be bugs; a strong sense of dread of how long it was taking to get "simple, daily tasks" figured out.

Upshot

Management had a fit. Gradually, the whiteboards in the project room were covered with post-its and questions written in colored dry-erase markers. Management had a much bigger fit.

Product Owner leadership was pulled in to weigh in on these "edge cases," which led to IT management having another fit. The testers were raising legitimate questions. When the scenarios were explained to the bosses of the people actually using the software, they tried the software themselves. And sided with the testers and the developers: There were serious flaws.

We reassessed the remaining tasks and worked like maniacs to address the problems uncovered. We delivered the product some two months late - but it worked. Everyone involved, including the Product Owner leadership who were now regularly in the morning meetings, felt far more comfortable with the state of the software.

Lessons

The "hard evidence" and metrics and facts all pointed to one conclusion. The "feelings" and "emotions" and "beliefs" pointed to another.

In this case, following the emotion-based decision path was correct.

Counting bugs found and fixed in the release was interesting, but did not give a real measure of the readiness of the product. Likewise, counting test cases executed gave a rough idea of progress in testing and did nothing at all to look at how the software actually functioned for the people really using it.

I can hear a fair number of folks yelling "PETE! That is the point of Agile!"

Let me ask a simple question - How many "Agile" organizations are still relying on "facts" to make decisions around implementation or delivery?

Saturday, March 5, 2016

On Visions and Things Not There

When I was playing in an Irish folk band, one thing we did each March was visit elementary schools to play music and talk a bit about Ireland, in an attempt to get away from the image of dancing leprechauns and green beer and "traditional Irish food" like corned beef and cabbage.

One year, we were playing for a room full of kindergartners when one of them asked "Are leprechauns real?" The teacher smiled and chuckled a bit and, for some reason, the other four guys in the band looked at me and one said "This one is yours, Pete."

I looked at the little girl who asked the question and said "Just because you don't see something does not mean it is not there." This made the teacher smile and nod. It also got us out of a pickle.

A few days ago, our tomcat, Pumpkin, was staring intently at something neither my lady-wife nor I could see. He was clearly watching something, and it was moving. He looked precisely as if he was stalking something. My lady-wife asked if I knew what he was watching - I had no idea.

Now, we live with three cats in the house. All of them, at different times, will watch something very intently. The fact that the humans could not see anything did not matter in the least.

Software is a bit like that. You know something is wonky, and you can stare at that bit all day knowing something isn't right. And not see a blasted thing.

You know something is there. You see bits that don't seem right. No one else seems to see it. You see odd behavior and sometimes you can recreate it - but often, you repeat the same steps and ... nothing is there.

So you keep looking. You might find it. You might lose interest and move on. I find it a good idea to write myself a note on what I saw and what I thought might be factors in the behavior.

Because it is likely to come back again.


Monday, February 29, 2016

On Testing and Quality Engineering

The other day I read an article on how Quality Engineering was something beyond testing. In the course of reading it, it struck me that the author had a totally different understanding of those two terms than I do.

Here then, is my response...



On Testing and Quality Engineering

A common view of testing, perhaps what some consider is the "real" or "correct" view, is that testing validates behavior. Tests "pass" or "fail" based on expectations and the point of testing is to confirm those expectations.

Introducing the concept of "Quality" to this conception of testing brings in other problems. It seems the question of "Quality" is often tied to a "voice of authority." For some people that "authority" is the near-legendary Jerry Weinberg: "Quality is value to some person." For others the "authority" is Joseph Juran: "fitness for use."

How do we know about the software we are working on? What is it that gives us the touch points to be able to measure this?

There are the classic measures used by advocates of testing as validation or pass/fail (a rough sketch of computing them follows the list):
- percentage of code coverage;
- proportion of function coverage;
- percentage of automated vs. manual tests;
- number of test cases run;
- number of passing test cases;
- number of failing test cases;
- number of bugs found or fixed.
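
To make the point concrete, here is a minimal sketch of how easily such numbers are produced. This is illustrative only - the record layout, the test names and the sample results are all invented, not taken from any real project:

```python
# A minimal sketch: computing "classic" metrics from a list of test results.
# TestResult and the sample data below are hypothetical, for illustration.
from dataclasses import dataclass

@dataclass
class TestResult:
    name: str
    passed: bool
    automated: bool

results = [
    TestResult("login_accepts_valid_user", passed=True, automated=True),
    TestResult("login_rejects_bad_password", passed=True, automated=True),
    TestResult("report_exports_to_csv", passed=False, automated=False),
]

run = len(results)
passing = sum(1 for r in results if r.passed)
failing = run - passing
automated_pct = 100 * sum(1 for r in results if r.automated) / run

print(f"test cases run: {run}")
print(f"passing: {passing}, failing: {failing}")
print(f"automated vs. manual: {automated_pct:.0f}% automated")
# Every number above is trivial to produce - and none of them says anything
# about how the software behaves for the people actually using it.
```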

For some organizations, these may shed some light on testing or on the perceived progress of testing. But they say nothing about the software itself or the quality of the software being tested, in spite of the claims made by some people.

One response, a common one, is that the question of the “quality of the software” is not a concern of “testing,” that it is a concern for “quality engineering.” Thus, testing is independent of the concerns of overall quality.

My view of this? 

Hogwash.

Rubbish. 

 When people ask me what testing is, my working definition is:

Software testing is a systematic evaluation of the behavior of a piece of software,
based on some model.

By using models that are relevant to the project, epic or story, we can select appropriate methods and techniques in place of relying on organizational comfort-zones. If one model we use is “conformance to documented requirements” we exercise the software one way. If we are interested in aspects of performance or load capacity, we’ll exercise the software in another way.
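
To illustrate how the chosen model shapes the testing, here is a small sketch. The checkout() function, the expected total and the timing budget are all stand-ins I invented for illustration - the point is simply that two models lead to two very different ways of exercising the same code:

```python
# A minimal sketch: one hypothetical function, exercised under two models.
# checkout() is an invented stand-in for the real system under test.
import time

def checkout(cart):
    time.sleep(0.01)  # pretend this does real work
    return {"status": "ok", "total": sum(cart)}

# Model 1: conformance to documented requirements -
# compare observed behavior against what the spec says the total should be.
def test_total_matches_documented_requirement():
    assert checkout([5, 10])["total"] == 15

# Model 2: performance / load capacity -
# exercise the same function repeatedly and check an assumed timing budget.
def test_stays_within_assumed_latency_budget():
    start = time.perf_counter()
    for _ in range(100):
        checkout([5, 10])
    assert time.perf_counter() - start < 5.0  # budget invented for the sketch
```

Run under pytest, both tests can "pass" - but each one passes only relative to its own model, which is the point: a "pass" means nothing except in terms of the model being exercised.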

There is no rule limiting a tester to using a single model. Most software projects will need multiple models to be considered in testing. There are some concepts that are important in this working definition.

What does this mean?

Good testing takes disciplined, thoughtful work. Following precisely the steps that were given is not testing, it is following a script. Testing takes consideration beyond the simple, straightforward path.

As for the idea of “documented requirements,” they serve as information points, possibly starting points for meaningful testing.

Good testing requires communication. Real communication is not documents being emailed back and forth. Communication is bi-directional. It is not a lecture or a monologue. Good testing requires conversation to help make sure all parties are in alignment.

Good testing looks at the reason behind the project, the change that is intended to be seen. Good testing looks to understand the impact within the system to the system itself and to the people using the software.

Good testing looks at these reasons and purposes for the changes and compares them to the team and company purpose and values. Are they in alignment with the mission, purpose and core values of the organization? Good testing includes a willingness to report variances in these fundamental considerations, beyond requirements and code.

Good testing can exercise the design before a single line of code is written. Good testing can help search out implied or undocumented requirements to catch variances before design is finalized.

Good testing can help product owners, designers and developers in demonstrating the impact of changes on people who will be working with the software. Good testing can help build consensus within the team as to the very behavior of the software.

Good testing can navigate between function-level testing and broader aspects of testing, by following multiple roles within the application and evaluating what people using or impacted by the change will experience.

Good testing can help bring the voice of the customer, internal and external, to the conversation when nothing or no one else does.

Good testing does not assure anything. Good testing challenges assurances. It investigates possibilities and asks questions about what is discovered.

Good testing challenges assumptions and presumptions. It looks for ways in which those assumptions and presumptions are not valid or are not appropriate in the project being worked on.

Good testing serves the stakeholders of the project by being in service to them.

What some people describe as “quality engineering” is, in my experience, part of good software testing.

 

Monday, November 23, 2015

On Motivation, part 2

As the discussion I was having with the unicorn at the coffee shop was winding up, a fellow I worked with a few years ago came in looking rather frazzled. He joined us, although he looked rather askance at the unicorn. We made small talk for a bit. He had been promoted to a manager position some six months before and seemed frustrated.

The reason he seemed frustrated eventually seeped out: he was trying to get "his resources" to "engage" in some new methods of doing things.

About this time, the unicorn bowed out and excused himself. I'm not sure this fellow even noticed him sitting at the table.

When we had worked together, he struck me as someone perpetually looking to make a mark in some way. He always acted as if he knew better than anyone on the team, or in a discussion, how to address any problem. He made sure that he offered advice to team leads and managers on how to address a problem - which normally involved wholesale changes to bring whatever was under discussion in line with whatever his set of beliefs was at the moment.

Funny though - his "beliefs" tended to shift. I'm not sure why.

It almost was as if he looked at whatever the situation was - and decided it needed to be different. Why things were the way they were or how they got that way did not seem to matter.

He deemed them valueless and in need of complete replacement.

I got pretty tired of it after a while. When I was moved to a different group following a reorganization (yeah, these guys did that every 6 months or so) I did not miss the turmoil or drama of someone ranting about how screwed up things were.

Back to the coffee shop...

So, the fellow was trying to get "his resources" to "engage" in new methods of doing things. The challenge was that people were pushing back. They had always grumbled. Now, they were refusing "to cooperate."

And he was frustrated.

So I took a deep breath and tilted my head, just so, and asked "The processes that were in place before, the ones you replaced. Why were they implemented?"

I think he wanted to glare at me. Actually, I suspect he wanted to punch me. Instead, he said, "Look. This is stupid. I know what needs to be done and how things should be. And they just don't want to do it."

And I sipped my coffee and asked, "Remember when we used to complain about the 'policy du jour' and every 6 months everything changed, unless a new manager rolled in sooner than that? Remember how we used to kvetch about things changing for no apparent reason?"

He glared at me. Frankly, I think he wanted to hit me. (That is funny to people who have met both of us.) "Look," he said, "the problem is these people just don't want to embrace anything new. It is not me or my problem - it is them."

He left the coffee shop. I suspect it may be a while before he goes to that coffee shop again.

The Problem

I suspect that is a pretty good summation of the view of people - managers, directors, VPs, dictators, whatever - "It is not me, it is them."

The irony is, in my experience, the first and foremost rule of anyone looking to change or improve things is - Learn and Understand how things got the way they are.

It is rarely as straightforward as some would have it. Problems exist - processes exist - processes are normally introduced to address specific problems. Other problems may not be addressed by the changes, but these are usually judged to be lower priority than the ones being addressed.

So, new Managers, Leads, VPs, Directors, Bosses... whatever - before you make changes, I have found it to be a really good idea to take the time to learn how the organization got there. Even if you "watched" the "mistakes" happen, it is unlikely you were in the discussions that looked at the needs, the problems and the alternatives that got you to where you are.

Motivation?

If you want your "resources" to "get on the bus" and support you, I suggest you take the time to learn these things. Without doing so, it is almost certain that the people you expect to do the things you are mandating will give your direction and instructions the appropriate level of effort and dedication.

None at all.

Because, when you move on, all these changes will be changed, and nothing will really change.

So, what is the motivation you have to make changes? Are you trying to "make your mark?" Or are you trying to do what is right for the organization?

Friday, October 30, 2015

On Motivation, part 1

I recently wandered into a neighborhood coffee shop for a little defocusing - and some of their Kenyan roast coffee and a fresh scone. While in line to place my order, my friend the unicorn walked in.

We had not intended to meet; it was just a happy chance. We sat down with our respective coffees and began talking. As happens sometimes, the 'catching up' developed into talking about something of interest. In this case, we found ourselves talking about motivation. We quickly set aside the stuff about "motivating people" and turned to forms of motivation - what motivates, maybe inspires, people to do work.

Most technical people we know who seek advancement and promotion into leadership or management positions fall into a few groups. Now, this isn't a terribly scientific study, just what the unicorn and I have seen.

There are the folks who really don't want to manage people and like getting their hands dirty - they like the technical challenges that come with bigger titles and pay-grades.

Then there is the other major group - they want to lead beyond a technical perspective. They want to be "in charge."

The first type - These are the same type you find in very technical enlisted roles in the military - they soar through ranks at lightning speed. They display astounding prowess at tasks that others cannot comprehend. They show others how to do things, then dive in next to them in the doing - teaching their juniors what they are doing, how and why. They leave officers shaking their heads at how astoundingly well they do their jobs.

Until they get to the level where they "supervise" others. Then they don't get to do what they really like doing. Then they watch other people do what they want to be doing. And the longer they are in, the higher the rank they achieve and the further they get from doing what they truly want to do. So they leave - they don't reenlist.

In Corporate-Land, these same people, if they get assigned or promoted beyond "getting their hands dirty" and doing what they like doing, tend to resign and take another job. 

The second type - These are the folks who want to get into "leadership" positions. They are the movers and shakers and the up-and-comers in the organization.

Some folks have a negative view of everyone who is in this second, broad group. Neither the unicorn nor I can really fault people for having ambitions or desires. Nor could we really find fault with people wanting to get ahead and move up the ladder.

After all, if they are reasonably competent in technical roles, maybe - just maybe - they will remember what it was like in those roles as they move up in the organization chart.

For me, when dealing with managers or directors or other boss-types, I find it helpful if they have some appreciation of the challenges of the work done by technical folks, be it developers, DBAs, testers, whatever. While they may not be able to help from a technical perspective, they may be able to offer assistance in other ways, for example, running interference with other, less technical managers or functionaries.

People growing into roles that challenge them is an excellent thing. It is a desirable thing in my mind. Granted, the roles I have moved into have not been management ones. My forays into management have convinced me that I do not have the right "makeup" for managing others.

I salute those who do have that makeup and make full use of it. Indeed, I salute those managers who are motivated to manage others well, and help those they manage discover what it is that motivates them.

A third type - These are the folks who want to get into "leadership" positions for reasons I find to be less than honorable. Maybe you have heard that "power corrupts." I find that the question of why one seeks power shines a light on just how true that is, or is not.

Some people have something less than altruistic motives. Some desire high rank for achieving their own ends - their own self-aggrandizement. In these instances, I suspect the corruption has already occurred - and the quest for power is, in fact, the motivation.

The unicorn blinked at me.

He said something to the effect that people have their own motivations. He chuckled (a scary sound, frankly) at the thought that some of these sounded like Death Eaters. I did stop a moment and consider.

I was reminded that individual people are motivated by different things and these generally are internal to each of them. Their motivation drives their choices and how they work, just as mine do.

I can accept or reject those motivations and actions based on my values and what I hold important. I can also choose not to associate with those whom I find I cannot support.

Thursday, October 15, 2015

On Service, Servants and Software

A few weeks ago, I was having a glass of wine with a couple of colleagues one evening discussing the role of Software Testers in developing good and well-performing software. There were some of the oft-stated lines about "not running the project" and "not owning quality" and "you can't test quality in" and so forth. We dismissed them as trite and irrelevant.

Where we landed was:
Software Testers serve the needs of the project and support other participants and stakeholders.

This brought up ideas around how the above can be interpreted. One person observed a distinction between "service and servants" - apparently triggered by something he had heard or read, though he could not recall the source.

This sent us into the question of what "service" meant. Being who we were, we wandered off to distant times to discuss the idea. Of old, the Samurai of Japan were in service to others - at least in theory. In Europe, knights held their position through service to their lords. In both cases, it was possible to lose one's station - a bit that often gets left out of the romantic stories, which tend to overlook some parts and emphasize others. Reality was never as neat and tidy as stories, books and movies would have it.

Still, we looked at the idea of serving others.

Testers are in service to others. 

Then again, software developers are also in service to others. As are others in the various roles around software development. As are those for whom we develop software. The needs being addressed are usually problems that must be solved to support those whom they, in turn, serve.

People serve people who serve people.

Most folks, in their working life, serve someone else. We don't want to admit it, but unless we are the ones paying others - who serves whom? How do we get the money we earn?

The simple fact is, being in service and serving others are closely linked. I find people who object to being "servants" are people I cannot completely understand. When they object to being "servants" I look to see how they treat those who serve them.

It dawned on me in that discussion that there was much some people could learn about service, servants and servitude from people who are in service. Most Americans don't really know what that means or entails, to be "in service."

Perhaps that is part of the problem.

Some people see people who serve others as some form of lower life than they are. They have adopted a Victorian or Edwardian view of "station in life" - maybe they watched too many episodes of "Upstairs/Downstairs" or "Downton Abbey."

They see the films or shows where the household staff (servants) turn and face the wall when the family in the house, the ones they are "in service" to, pass them in the hall or stairway.

So now these folks treat waitstaff at restaurants as inferiors. They also tend to look down on hotel staff, flight attendants, sales clerks, construction workers, the simple minded, physically (and mentally or emotionally) handicapped, emotionally damaged, traffic cops, TSA agents, teachers, administrative assistants, clerical staff, med techs, nursing assistants, gardeners, Mexicans, Asians, Indians, or any other they see as beneath them.

I suspect, when one has such a superior opinion of oneself and a low opinion of "lesser beings," that it is easy to look down on others - and that the thought of being looked down on by others is repugnant.

If we, as testers, serve others, does that make us lesser beings?

Does that make us inferior?

Hardly - unless your ego is so fragile that it can't handle the simple idea stated above.

I've been in software for longer than some folks with such attitudes. I know as a developer, business analyst, project manager or software tester, my role exists so I can be of service to... someone else.

As a person in software development, I am a servant to a broader purpose. My purpose is to aid the project, make the software better, and by extension, make the company better.

Yes, Testers provide a service.

We are in service.

We are servants.

We serve for the betterment of our organizations, our craft and ourselves.

We are second to none.

Saturday, September 5, 2015

How Advice to Beginning Pipe Bands also Works for Testers

When I was in demand to teach drumming workshops for pipe bands, I used to wrap up weekend sessions with new, or new-ish, bands with a session for both pipers and drummers - usually after they had had a practice together.

The typical format for these workshops was generally straightforward. I'd roll in Saturday morning between 7:30 and 8:00 AM with coffee (something like a "Take 10" from Tim Hortons, where an entire pot comes in a take-away box with a stack of cups and sugar, milk, cream, etc.) and a mix of donuts and bagels. People would start showing up a bit later and find coffee ready - at least enough to hold everyone over until a pot was brewed and ready at the venue. I'd set up the materials needed for the day while folks were chatting and drinking coffee, although some opted for power/energy drinks they brought themselves.

We'd work in sections, according to a pre-defined schedule - for example, beginning snare drummers at 9:00, more advanced snare drummers around 10:30. Break for lunch around 12 or 12:30 and back at it again as soon as possible after that. When I finished with each group, they'd head off and work on their assignments/exercises in a corner or a different room. Early afternoon, typically around 1:30 or 2:00, I'd work with tenor and bass drummers. Around 3:00 everyone would come together and play exercises as a group - and then work on playing music.

We'd break around 5:30 or 6:00 for supper. After that, while officially "done for the day," usually, I'd work with people one or two at a time - if they wanted to. That would be more informal and very relaxed as we were all pretty tired by then.

Sunday, we'd start a little later, depending on venue and the individual band. We'd spend a lot more time working as a full group, working on the music and exercises from the day before. If someone was having a challenge with a passage, I'd spend a little time with them. If they still needed help, I'd send them off with someone who had it right to work on their own. The emphasis on Sunday was to work as a group, together, to make sure everyone was progressing - and to be ready to play with the pipers when they joined the drummers.

Usually the results were pretty good. The drummers were pleased to have made obvious progress. The pipers were pleased to hear the drummers playing with them and sounding good. At the end of the practice, I'd give a wrap up talk and encourage the drummers in particular and the band in general to keep working.

A typical message was something like:

The band has made huge strides. You all have come a very long way from (some point earlier). This weekend the drummers have worked their butts off trying to get things just right. There has been really good progress, and there is still more work to do.

At the band's current level of development and performance, there are a couple of things to think about and a bunch of stuff to set aside and ignore. Don't worry about wearing the latest style of uniform. If the band can afford only the basics - kilt, shirt, cap - then so be it. If the kilts don't match - so be it. You are starting out. All that stuff will come. Don't worry about designing band cap badges or patches for the shirts, or who wears what insignia. Don't worry if the drums don't match in make or color.

All of that will come in time. All of that will come as you get established and play out in public.

Don't worry about anything anyone else says to or about you. Don't worry about the condescending comments that some pipers or pipe bands will make. Don't worry about what other bands have or what attention they get.

Don't worry about seeking the same attention they get.

Work for yourselves. Practice hard and well. Practice individually. Practice with small groups. Rehearse as a band. Play well with the best sound, tone and execution you can achieve.

That will get the attention of people you want to get the attention of. If you are competing, the judges will know you are a new band and will not worry about details like if the kilts don't match. They will be focusing on what you do as a group. They will listen to how you perform. Perform well and the rest does not matter.

Let others worry about the periphery. You worry about, and do something about, the task at hand. Let your fingers and hands do your talking for you.

To testers, I'd like to say something similar.

The people shouting for attention and making a big deal about what they are doing - let them. Truth will out, as is sometimes said. Eventually people will figure out if there is something to their shouting or not.

Don't worry about them.

Don't worry about what they say. Really don't worry about what they say about you.

Let them.

Do what needs to be done to support your organization.

Educate over time by doing good work, then explaining the work you do - gently, in language that is understood by the people asking. Work to develop your understanding - and share the understanding you develop.

Share that with developers you work with, other testers, people who ask at the coffee or snack station or cafeteria.

You don't need to be a name at conferences or on twitter to be able to influence other people and help them learn.

Most people doing good work are doing it quietly, getting the job done and moving on to the next item.

The clanging gongs of people shouting about how cool they are are just that. Noise.

Let the work you do, do the shouting for you.